00:00:00.001 Started by upstream project "autotest-per-patch" build number 121028 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.100 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.101 The recommended git tool is: git 00:00:00.101 using credential 00000000-0000-0000-0000-000000000002 00:00:00.103 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.148 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.180 Using shallow fetch with depth 1 00:00:00.180 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.180 > git --version # timeout=10 00:00:00.197 > git --version # 'git version 2.39.2' 00:00:00.197 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.197 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.197 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.501 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.514 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.527 Checking out Revision 6e1fadd1eee50389429f9abb33dde5face8ca717 (FETCH_HEAD) 00:00:04.527 > git config core.sparsecheckout # timeout=10 00:00:04.540 > git read-tree -mu HEAD # timeout=10 00:00:04.558 > git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=5 00:00:04.576 Commit message: "pool: attach build logs for failed merge builds" 00:00:04.576 > git rev-list --no-walk 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=10 00:00:04.674 [Pipeline] Start of Pipeline 00:00:04.687 [Pipeline] library 00:00:04.689 Loading library shm_lib@master 00:00:04.689 Library shm_lib@master is cached. Copying from home. 00:00:04.707 [Pipeline] node 00:00:04.722 Running on VM-host-WFP7 in /var/jenkins/workspace/nvme-vg-autotest 00:00:04.727 [Pipeline] { 00:00:04.737 [Pipeline] catchError 00:00:04.738 [Pipeline] { 00:00:04.748 [Pipeline] wrap 00:00:04.754 [Pipeline] { 00:00:04.759 [Pipeline] stage 00:00:04.761 [Pipeline] { (Prologue) 00:00:04.776 [Pipeline] echo 00:00:04.777 Node: VM-host-WFP7 00:00:04.781 [Pipeline] cleanWs 00:00:04.787 [WS-CLEANUP] Deleting project workspace... 00:00:04.787 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.792 [WS-CLEANUP] done 00:00:04.967 [Pipeline] setCustomBuildProperty 00:00:05.021 [Pipeline] nodesByLabel 00:00:05.023 Found a total of 1 nodes with the 'sorcerer' label 00:00:05.031 [Pipeline] httpRequest 00:00:05.035 HttpMethod: GET 00:00:05.035 URL: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:05.035 Sending request to url: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:05.037 Response Code: HTTP/1.1 200 OK 00:00:05.037 Success: Status code 200 is in the accepted range: 200,404 00:00:05.038 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:06.203 [Pipeline] sh 00:00:06.488 + tar --no-same-owner -xf jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:06.508 [Pipeline] httpRequest 00:00:06.513 HttpMethod: GET 00:00:06.513 URL: http://10.211.164.96/packages/spdk_dd57ed3e88dcafd6e7188cca8ba5f8d9254a85a1.tar.gz 00:00:06.514 Sending request to url: http://10.211.164.96/packages/spdk_dd57ed3e88dcafd6e7188cca8ba5f8d9254a85a1.tar.gz 00:00:06.529 Response Code: HTTP/1.1 200 OK 00:00:06.529 Success: Status code 200 is in the accepted range: 200,404 00:00:06.530 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_dd57ed3e88dcafd6e7188cca8ba5f8d9254a85a1.tar.gz 00:00:58.878 [Pipeline] sh 00:00:59.161 + tar --no-same-owner -xf spdk_dd57ed3e88dcafd6e7188cca8ba5f8d9254a85a1.tar.gz 00:01:01.714 [Pipeline] sh 00:01:01.996 + git -C spdk log --oneline -n5 00:01:01.996 dd57ed3e8 sma: add listener check on vfio device creation 00:01:01.996 d36d2b7e8 doc: mark adrfam as optional 00:01:01.996 129e6ba3b test/nvmf: add missing remove listener discovery 00:01:01.996 38dca48f0 libvfio-user: update submodule to point to `spdk` branch 00:01:01.996 7a71abf69 fuzz/llvm_vfio_fuzz: limit length of generated data to `bytes_per_cmd` 00:01:02.011 [Pipeline] writeFile 00:01:02.023 [Pipeline] sh 00:01:02.302 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:02.311 [Pipeline] sh 00:01:02.587 + cat autorun-spdk.conf 00:01:02.587 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.587 SPDK_TEST_NVME=1 00:01:02.587 SPDK_TEST_FTL=1 00:01:02.587 SPDK_TEST_ISAL=1 00:01:02.587 SPDK_RUN_ASAN=1 00:01:02.587 SPDK_RUN_UBSAN=1 00:01:02.587 SPDK_TEST_XNVME=1 00:01:02.587 SPDK_TEST_NVME_FDP=1 00:01:02.587 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:02.591 RUN_NIGHTLY=0 00:01:02.592 [Pipeline] } 00:01:02.602 [Pipeline] // stage 00:01:02.612 [Pipeline] stage 00:01:02.614 [Pipeline] { (Run VM) 00:01:02.623 [Pipeline] sh 00:01:02.893 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:02.893 + echo 'Start stage prepare_nvme.sh' 00:01:02.893 Start stage prepare_nvme.sh 00:01:02.893 + [[ -n 4 ]] 00:01:02.893 + disk_prefix=ex4 00:01:02.893 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:02.893 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:02.893 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:02.893 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:02.893 ++ SPDK_TEST_NVME=1 00:01:02.893 ++ SPDK_TEST_FTL=1 00:01:02.893 ++ SPDK_TEST_ISAL=1 00:01:02.893 ++ SPDK_RUN_ASAN=1 00:01:02.893 ++ SPDK_RUN_UBSAN=1 00:01:02.893 ++ SPDK_TEST_XNVME=1 00:01:02.893 ++ SPDK_TEST_NVME_FDP=1 00:01:02.893 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:02.893 ++ RUN_NIGHTLY=0 00:01:02.893 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:02.893 + nvme_files=() 00:01:02.893 + declare -A nvme_files 00:01:02.893 + 
backend_dir=/var/lib/libvirt/images/backends 00:01:02.893 + nvme_files['nvme.img']=5G 00:01:02.893 + nvme_files['nvme-cmb.img']=5G 00:01:02.893 + nvme_files['nvme-multi0.img']=4G 00:01:02.893 + nvme_files['nvme-multi1.img']=4G 00:01:02.893 + nvme_files['nvme-multi2.img']=4G 00:01:02.893 + nvme_files['nvme-openstack.img']=8G 00:01:02.893 + nvme_files['nvme-zns.img']=5G 00:01:02.893 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:02.893 + (( SPDK_TEST_FTL == 1 )) 00:01:02.893 + nvme_files["nvme-ftl.img"]=6G 00:01:02.893 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:02.893 + nvme_files["nvme-fdp.img"]=1G 00:01:02.893 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:02.893 + for nvme in "${!nvme_files[@]}" 00:01:02.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:02.893 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:02.893 + for nvme in "${!nvme_files[@]}" 00:01:02.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:01:02.893 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:02.893 + for nvme in "${!nvme_files[@]}" 00:01:02.893 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:03.459 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.459 + for nvme in "${!nvme_files[@]}" 00:01:03.459 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:03.459 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:03.459 + for nvme in "${!nvme_files[@]}" 00:01:03.459 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:03.717 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:03.717 + for nvme in "${!nvme_files[@]}" 00:01:03.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:03.717 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:03.717 + for nvme in "${!nvme_files[@]}" 00:01:03.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:03.717 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:03.717 + for nvme in "${!nvme_files[@]}" 00:01:03.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:01:03.717 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:03.717 + for nvme in "${!nvme_files[@]}" 00:01:03.717 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:04.655 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.655 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:04.655 + echo 'End stage prepare_nvme.sh' 00:01:04.655 End stage prepare_nvme.sh 00:01:04.667 [Pipeline] sh 00:01:04.948 + DISTRO=fedora38 CPUS=10 RAM=12288 
jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:04.948 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:04.948 00:01:04.948 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:04.948 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:04.948 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:04.948 HELP=0 00:01:04.948 DRY_RUN=0 00:01:04.948 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:01:04.948 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:04.948 NVME_AUTO_CREATE=0 00:01:04.948 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:01:04.948 NVME_CMB=,,,, 00:01:04.948 NVME_PMR=,,,, 00:01:04.948 NVME_ZNS=,,,, 00:01:04.948 NVME_MS=true,,,, 00:01:04.948 NVME_FDP=,,,on, 00:01:04.948 SPDK_VAGRANT_DISTRO=fedora38 00:01:04.948 SPDK_VAGRANT_VMCPU=10 00:01:04.948 SPDK_VAGRANT_VMRAM=12288 00:01:04.948 SPDK_VAGRANT_PROVIDER=libvirt 00:01:04.948 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:04.948 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:04.948 SPDK_OPENSTACK_NETWORK=0 00:01:04.948 VAGRANT_PACKAGE_BOX=0 00:01:04.948 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:04.948 FORCE_DISTRO=true 00:01:04.948 VAGRANT_BOX_VERSION= 00:01:04.948 EXTRA_VAGRANTFILES= 00:01:04.948 NIC_MODEL=virtio 00:01:04.948 00:01:04.948 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:01:04.948 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:07.479 Bringing machine 'default' up with 'libvirt' provider... 00:01:08.412 ==> default: Creating image (snapshot of base box volume). 00:01:08.412 ==> default: Creating domain with the following settings... 
00:01:08.412 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713986133_71f008a0b89efb795782 00:01:08.412 ==> default: -- Domain type: kvm 00:01:08.412 ==> default: -- Cpus: 10 00:01:08.412 ==> default: -- Feature: acpi 00:01:08.412 ==> default: -- Feature: apic 00:01:08.412 ==> default: -- Feature: pae 00:01:08.412 ==> default: -- Memory: 12288M 00:01:08.412 ==> default: -- Memory Backing: hugepages: 00:01:08.412 ==> default: -- Management MAC: 00:01:08.412 ==> default: -- Loader: 00:01:08.412 ==> default: -- Nvram: 00:01:08.412 ==> default: -- Base box: spdk/fedora38 00:01:08.412 ==> default: -- Storage pool: default 00:01:08.412 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713986133_71f008a0b89efb795782.img (20G) 00:01:08.412 ==> default: -- Volume Cache: default 00:01:08.412 ==> default: -- Kernel: 00:01:08.412 ==> default: -- Initrd: 00:01:08.412 ==> default: -- Graphics Type: vnc 00:01:08.412 ==> default: -- Graphics Port: -1 00:01:08.412 ==> default: -- Graphics IP: 127.0.0.1 00:01:08.412 ==> default: -- Graphics Password: Not defined 00:01:08.412 ==> default: -- Video Type: cirrus 00:01:08.412 ==> default: -- Video VRAM: 9216 00:01:08.412 ==> default: -- Sound Type: 00:01:08.412 ==> default: -- Keymap: en-us 00:01:08.412 ==> default: -- TPM Path: 00:01:08.412 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:08.412 ==> default: -- Command line args: 00:01:08.412 ==> default: -> value=-device, 00:01:08.412 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:08.412 ==> default: -> value=-drive, 00:01:08.412 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:08.412 ==> default: -> value=-device, 00:01:08.412 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:08.412 ==> default: -> value=-device, 00:01:08.412 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:08.412 ==> default: -> value=-drive, 00:01:08.412 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:01:08.412 ==> default: -> value=-device, 00:01:08.412 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.412 ==> default: -> value=-device, 00:01:08.412 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:08.412 ==> default: -> value=-drive, 00:01:08.412 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:08.412 ==> default: -> value=-device, 00:01:08.412 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.412 ==> default: -> value=-drive, 00:01:08.412 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:08.412 ==> default: -> value=-device, 00:01:08.412 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.412 ==> default: -> value=-drive, 00:01:08.412 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:08.412 ==> default: -> value=-device, 00:01:08.412 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.412 ==> default: -> value=-device, 00:01:08.412 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:08.412 ==> default: -> value=-device, 00:01:08.412 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:08.412 ==> default: -> value=-drive, 00:01:08.412 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:08.412 ==> default: -> value=-device, 00:01:08.413 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.669 ==> default: Creating shared folders metadata... 00:01:08.669 ==> default: Starting domain. 00:01:10.147 ==> default: Waiting for domain to get an IP address... 00:01:28.257 ==> default: Waiting for SSH to become available... 00:01:28.257 ==> default: Configuring and enabling network interfaces... 00:01:33.529 default: SSH address: 192.168.121.198:22 00:01:33.529 default: SSH username: vagrant 00:01:33.529 default: SSH auth method: private key 00:01:36.068 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:44.223 ==> default: Mounting SSHFS shared folder... 00:01:46.127 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:46.127 ==> default: Checking Mount.. 00:01:47.529 ==> default: Folder Successfully Mounted! 00:01:47.529 ==> default: Running provisioner: file... 00:01:48.532 default: ~/.gitconfig => .gitconfig 00:01:49.100 00:01:49.100 SUCCESS! 00:01:49.100 00:01:49.100 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:49.100 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:49.100 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:49.100 00:01:49.111 [Pipeline] } 00:01:49.130 [Pipeline] // stage 00:01:49.137 [Pipeline] dir 00:01:49.138 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:01:49.139 [Pipeline] { 00:01:49.152 [Pipeline] catchError 00:01:49.154 [Pipeline] { 00:01:49.167 [Pipeline] sh 00:01:49.449 + vagrant ssh-config --host vagrant 00:01:49.449 + sed -ne /^Host/,$p 00:01:49.449 + tee ssh_conf 00:01:52.737 Host vagrant 00:01:52.737 HostName 192.168.121.198 00:01:52.737 User vagrant 00:01:52.737 Port 22 00:01:52.737 UserKnownHostsFile /dev/null 00:01:52.737 StrictHostKeyChecking no 00:01:52.737 PasswordAuthentication no 00:01:52.737 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:01:52.737 IdentitiesOnly yes 00:01:52.737 LogLevel FATAL 00:01:52.737 ForwardAgent yes 00:01:52.737 ForwardX11 yes 00:01:52.737 00:01:52.750 [Pipeline] withEnv 00:01:52.752 [Pipeline] { 00:01:52.768 [Pipeline] sh 00:01:53.054 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:53.054 source /etc/os-release 00:01:53.054 [[ -e /image.version ]] && img=$(< /image.version) 00:01:53.054 # Minimal, systemd-like check. 
00:01:53.054 if [[ -e /.dockerenv ]]; then 00:01:53.054 # Clear garbage from the node's name: 00:01:53.054 # agt-er_autotest_547-896 -> autotest_547-896 00:01:53.054 # $HOSTNAME is the actual container id 00:01:53.054 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:53.054 if mountpoint -q /etc/hostname; then 00:01:53.054 # We can assume this is a mount from a host where container is running, 00:01:53.054 # so fetch its hostname to easily identify the target swarm worker. 00:01:53.054 container="$(< /etc/hostname) ($agent)" 00:01:53.054 else 00:01:53.054 # Fallback 00:01:53.054 container=$agent 00:01:53.054 fi 00:01:53.054 fi 00:01:53.054 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:53.054 00:01:53.322 [Pipeline] } 00:01:53.340 [Pipeline] // withEnv 00:01:53.348 [Pipeline] setCustomBuildProperty 00:01:53.359 [Pipeline] stage 00:01:53.361 [Pipeline] { (Tests) 00:01:53.381 [Pipeline] sh 00:01:53.658 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:53.930 [Pipeline] timeout 00:01:53.930 Timeout set to expire in 40 min 00:01:53.932 [Pipeline] { 00:01:53.948 [Pipeline] sh 00:01:54.227 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:54.795 HEAD is now at dd57ed3e8 sma: add listener check on vfio device creation 00:01:54.807 [Pipeline] sh 00:01:55.087 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:55.362 [Pipeline] sh 00:01:55.644 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:55.919 [Pipeline] sh 00:01:56.202 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:01:56.467 ++ readlink -f spdk_repo 00:01:56.467 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:56.467 + [[ -n /home/vagrant/spdk_repo ]] 00:01:56.467 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:56.467 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:56.467 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:56.467 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:56.467 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:56.467 + cd /home/vagrant/spdk_repo 00:01:56.467 + source /etc/os-release 00:01:56.467 ++ NAME='Fedora Linux' 00:01:56.467 ++ VERSION='38 (Cloud Edition)' 00:01:56.467 ++ ID=fedora 00:01:56.467 ++ VERSION_ID=38 00:01:56.467 ++ VERSION_CODENAME= 00:01:56.467 ++ PLATFORM_ID=platform:f38 00:01:56.467 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:56.467 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:56.467 ++ LOGO=fedora-logo-icon 00:01:56.467 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:56.467 ++ HOME_URL=https://fedoraproject.org/ 00:01:56.467 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:56.467 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:56.467 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:56.467 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:56.467 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:56.467 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:56.467 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:56.467 ++ SUPPORT_END=2024-05-14 00:01:56.467 ++ VARIANT='Cloud Edition' 00:01:56.467 ++ VARIANT_ID=cloud 00:01:56.467 + uname -a 00:01:56.467 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:56.467 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:57.054 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:57.313 Hugepages 00:01:57.313 node hugesize free / total 00:01:57.313 node0 1048576kB 0 / 0 00:01:57.313 node0 2048kB 0 / 0 00:01:57.313 00:01:57.313 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:57.313 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:57.313 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:57.313 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:57.313 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:57.313 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:57.313 + rm -f /tmp/spdk-ld-path 00:01:57.313 + source autorun-spdk.conf 00:01:57.313 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.313 ++ SPDK_TEST_NVME=1 00:01:57.313 ++ SPDK_TEST_FTL=1 00:01:57.313 ++ SPDK_TEST_ISAL=1 00:01:57.313 ++ SPDK_RUN_ASAN=1 00:01:57.313 ++ SPDK_RUN_UBSAN=1 00:01:57.313 ++ SPDK_TEST_XNVME=1 00:01:57.313 ++ SPDK_TEST_NVME_FDP=1 00:01:57.313 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.313 ++ RUN_NIGHTLY=0 00:01:57.313 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:57.313 + [[ -n '' ]] 00:01:57.313 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:57.313 + for M in /var/spdk/build-*-manifest.txt 00:01:57.313 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:57.313 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:57.579 + for M in /var/spdk/build-*-manifest.txt 00:01:57.579 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:57.579 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:57.579 ++ uname 00:01:57.579 + [[ Linux == \L\i\n\u\x ]] 00:01:57.579 + sudo dmesg -T 00:01:57.579 + sudo dmesg --clear 00:01:57.579 + dmesg_pid=5349 00:01:57.579 + [[ Fedora Linux == FreeBSD ]] 00:01:57.579 + sudo dmesg -Tw 00:01:57.579 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.579 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:57.579 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:57.579 + [[ 
-x /usr/src/fio-static/fio ]] 00:01:57.579 + export FIO_BIN=/usr/src/fio-static/fio 00:01:57.579 + FIO_BIN=/usr/src/fio-static/fio 00:01:57.579 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:57.579 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:57.579 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:57.579 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.579 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:57.579 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:57.579 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.579 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:57.579 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:57.579 Test configuration: 00:01:57.579 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.579 SPDK_TEST_NVME=1 00:01:57.579 SPDK_TEST_FTL=1 00:01:57.579 SPDK_TEST_ISAL=1 00:01:57.579 SPDK_RUN_ASAN=1 00:01:57.579 SPDK_RUN_UBSAN=1 00:01:57.579 SPDK_TEST_XNVME=1 00:01:57.579 SPDK_TEST_NVME_FDP=1 00:01:57.579 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.579 RUN_NIGHTLY=0 19:16:23 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:57.579 19:16:23 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:57.579 19:16:23 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:57.579 19:16:23 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:57.579 19:16:23 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.579 19:16:23 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.580 19:16:23 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.580 19:16:23 -- paths/export.sh@5 -- $ export PATH 00:01:57.580 19:16:23 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:57.580 19:16:23 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:57.580 19:16:23 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:57.580 19:16:23 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713986183.XXXXXX 00:01:57.580 19:16:23 -- 
common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713986183.sQDJzi 00:01:57.580 19:16:23 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:57.580 19:16:23 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:01:57.580 19:16:23 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:57.580 19:16:23 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:57.580 19:16:23 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:57.580 19:16:23 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:57.580 19:16:23 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:01:57.580 19:16:23 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.838 19:16:23 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:57.838 19:16:23 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:01:57.838 19:16:23 -- pm/common@17 -- $ local monitor 00:01:57.838 19:16:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.838 19:16:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5383 00:01:57.838 19:16:23 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:57.838 19:16:23 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5385 00:01:57.838 19:16:23 -- pm/common@21 -- $ date +%s 00:01:57.838 19:16:23 -- pm/common@26 -- $ sleep 1 00:01:57.838 19:16:23 -- pm/common@21 -- $ date +%s 00:01:57.838 19:16:23 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713986183 00:01:57.838 19:16:23 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713986183 00:01:57.838 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713986183_collect-vmstat.pm.log 00:01:57.838 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713986183_collect-cpu-load.pm.log 00:01:58.774 19:16:24 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:01:58.774 19:16:24 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:58.774 19:16:24 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:58.774 19:16:24 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:58.774 19:16:24 -- spdk/autobuild.sh@16 -- $ date -u 00:01:58.774 Wed Apr 24 07:16:24 PM UTC 2024 00:01:58.774 19:16:24 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:58.774 v24.05-pre-413-gdd57ed3e8 00:01:58.774 19:16:24 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:58.774 19:16:24 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:58.774 19:16:24 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:58.774 19:16:24 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:58.774 19:16:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.774 ************************************ 00:01:58.774 START TEST asan 00:01:58.774 ************************************ 
00:01:58.774 using asan 00:01:58.774 19:16:24 -- common/autotest_common.sh@1111 -- $ echo 'using asan' 00:01:58.774 00:01:58.774 real 0m0.000s 00:01:58.774 user 0m0.000s 00:01:58.774 sys 0m0.000s 00:01:58.774 19:16:24 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:58.774 19:16:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:58.774 ************************************ 00:01:58.774 END TEST asan 00:01:58.774 ************************************ 00:01:59.032 19:16:24 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:59.032 19:16:24 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:59.032 19:16:24 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:59.032 19:16:24 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:59.032 19:16:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:59.032 ************************************ 00:01:59.032 START TEST ubsan 00:01:59.032 ************************************ 00:01:59.032 using ubsan 00:01:59.032 19:16:24 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:01:59.032 00:01:59.032 real 0m0.000s 00:01:59.032 user 0m0.000s 00:01:59.032 sys 0m0.000s 00:01:59.032 19:16:24 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:59.032 19:16:24 -- common/autotest_common.sh@10 -- $ set +x 00:01:59.032 ************************************ 00:01:59.032 END TEST ubsan 00:01:59.032 ************************************ 00:01:59.032 19:16:24 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:59.032 19:16:24 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:59.032 19:16:24 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:59.032 19:16:24 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:59.032 19:16:24 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:59.032 19:16:24 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:59.032 19:16:24 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:59.032 19:16:24 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:59.032 19:16:24 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:59.032 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:59.032 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:59.621 Using 'verbs' RDMA provider 00:02:15.439 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:30.326 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:30.326 Creating mk/config.mk...done. 00:02:30.326 Creating mk/cc.flags.mk...done. 00:02:30.326 Type 'make' to build. 
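Note: the asterisk-banner "START TEST" / "END TEST" markers and the real/user/sys timing lines above, like the "run_test make make -j10" call that follows, come from SPDK's run_test helper (the autotest_common.sh frames visible in the xtrace). A minimal, illustrative sketch of that wrapper pattern in bash — simplified, and not SPDK's exact implementation:

  #!/bin/bash
  # Illustrative run_test-style wrapper (sketch, not SPDK's actual code).
  # It prints banner-delimited START/END markers around a timed command,
  # which is what produces the banners and timing lines seen in this log.
  run_test() {
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"       # bash's 'time' keyword emits the real/user/sys lines
      local rc=$?     # capture the wrapped command's exit status
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
      return "$rc"
  }

  # Usage matching the invocations seen in this log:
  #   run_test asan echo 'using asan'
  #   run_test ubsan echo 'using ubsan'
  #   run_test make make -j10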
00:02:30.326 19:16:54 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:30.326 19:16:54 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:30.326 19:16:54 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:30.326 19:16:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.326 ************************************ 00:02:30.326 START TEST make 00:02:30.326 ************************************ 00:02:30.326 19:16:54 -- common/autotest_common.sh@1111 -- $ make -j10 00:02:30.326 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:30.326 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:30.326 meson setup builddir \ 00:02:30.326 -Dwith-libaio=enabled \ 00:02:30.326 -Dwith-liburing=enabled \ 00:02:30.326 -Dwith-libvfn=disabled \ 00:02:30.326 -Dwith-spdk=false && \ 00:02:30.326 meson compile -C builddir && \ 00:02:30.326 cd -) 00:02:30.326 make[1]: Nothing to be done for 'all'. 00:02:32.232 The Meson build system 00:02:32.232 Version: 1.3.1 00:02:32.232 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:32.232 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:32.232 Build type: native build 00:02:32.232 Project name: xnvme 00:02:32.232 Project version: 0.7.3 00:02:32.232 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:32.232 C linker for the host machine: cc ld.bfd 2.39-16 00:02:32.232 Host machine cpu family: x86_64 00:02:32.232 Host machine cpu: x86_64 00:02:32.232 Message: host_machine.system: linux 00:02:32.232 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:32.232 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:32.232 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:32.232 Run-time dependency threads found: YES 00:02:32.232 Has header "setupapi.h" : NO 00:02:32.232 Has header "linux/blkzoned.h" : YES 00:02:32.232 Has header "linux/blkzoned.h" : YES (cached) 00:02:32.232 Has header "libaio.h" : YES 00:02:32.232 Library aio found: YES 00:02:32.232 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:32.232 Run-time dependency liburing found: YES 2.2 00:02:32.232 Dependency libvfn skipped: feature with-libvfn disabled 00:02:32.232 Run-time dependency appleframeworks found: NO (tried framework) 00:02:32.232 Run-time dependency appleframeworks found: NO (tried framework) 00:02:32.232 Configuring xnvme_config.h using configuration 00:02:32.232 Configuring xnvme.spec using configuration 00:02:32.232 Run-time dependency bash-completion found: YES 2.11 00:02:32.232 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:32.232 Program cp found: YES (/usr/bin/cp) 00:02:32.232 Has header "winsock2.h" : NO 00:02:32.232 Has header "dbghelp.h" : NO 00:02:32.232 Library rpcrt4 found: NO 00:02:32.232 Library rt found: YES 00:02:32.232 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:32.232 Found CMake: /usr/bin/cmake (3.27.7) 00:02:32.232 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:02:32.232 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:02:32.232 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:02:32.232 Build targets in project: 32 00:02:32.232 00:02:32.232 xnvme 0.7.3 00:02:32.232 00:02:32.232 User defined options 00:02:32.232 with-libaio : enabled 00:02:32.232 with-liburing: enabled 00:02:32.232 with-libvfn : disabled 00:02:32.232 with-spdk : false 00:02:32.232 00:02:32.232 Found ninja-1.11.1.git.kitware.jobserver-1 at 
/usr/local/bin/ninja 00:02:32.491 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:32.491 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:02:32.491 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:02:32.491 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:02:32.491 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:02:32.491 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:02:32.491 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 00:02:32.491 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:02:32.491 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:02:32.749 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:02:32.749 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:02:32.749 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:02:32.749 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:02:32.749 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:02:32.749 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:02:32.749 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:02:32.749 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:02:32.749 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:02:32.749 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:02:32.749 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:02:32.749 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:02:33.007 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:02:33.007 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:02:33.007 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:02:33.007 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:02:33.007 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:02:33.007 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:02:33.007 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:02:33.007 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:02:33.007 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:02:33.007 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:02:33.007 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:02:33.007 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:02:33.007 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:02:33.007 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:02:33.007 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:02:33.007 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:02:33.007 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:02:33.007 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:02:33.007 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:02:33.007 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:02:33.007 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:02:33.007 [42/203] Compiling C object 
lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:02:33.007 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:02:33.007 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:02:33.007 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:02:33.007 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:02:33.007 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:02:33.007 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:02:33.007 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:02:33.265 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:02:33.265 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:02:33.265 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:02:33.265 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:02:33.265 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:02:33.265 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:02:33.265 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:02:33.265 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:02:33.265 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:02:33.265 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:02:33.265 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:02:33.265 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:02:33.265 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:02:33.265 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:02:33.265 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:02:33.523 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:02:33.523 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:02:33.523 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:02:33.523 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:02:33.523 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:02:33.523 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:02:33.523 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:02:33.523 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:02:33.523 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:02:33.523 [74/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:02:33.523 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:02:33.523 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:02:33.523 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:02:33.781 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:02:33.781 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:02:33.781 [80/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:02:33.781 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:02:33.781 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:02:33.781 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:02:33.781 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:02:33.781 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:02:33.781 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 
00:02:33.781 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:02:33.781 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:02:33.781 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:02:34.039 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:02:34.039 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:02:34.039 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:02:34.039 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:02:34.039 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:02:34.039 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:02:34.040 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:02:34.040 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:02:34.040 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:02:34.040 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:02:34.040 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:02:34.040 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:02:34.040 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:02:34.040 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:02:34.040 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:02:34.040 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:02:34.040 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:02:34.040 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:02:34.040 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:02:34.297 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:02:34.297 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:02:34.297 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:02:34.297 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:02:34.297 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:02:34.297 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:02:34.297 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:02:34.297 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:02:34.297 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:02:34.297 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:02:34.297 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:02:34.297 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:02:34.297 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:02:34.297 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:02:34.297 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:02:34.297 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:02:34.297 [125/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:02:34.297 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:02:34.297 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:02:34.297 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:02:34.297 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:02:34.297 [130/203] Compiling C 
object lib/libxnvme.a.p/xnvme_req.c.o 00:02:34.297 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:02:34.298 [132/203] Linking target lib/libxnvme.so 00:02:34.555 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:02:34.555 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:02:34.555 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:02:34.555 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:02:34.555 [137/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:02:34.555 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:02:34.555 [139/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:02:34.555 [140/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:02:34.555 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:02:34.555 [142/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o 00:02:34.555 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:02:34.813 [144/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:02:34.813 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:02:34.813 [146/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:02:34.813 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:02:34.813 [148/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:02:34.813 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:02:34.813 [150/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:02:34.813 [151/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:02:34.813 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:02:34.813 [153/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:02:34.813 [154/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:02:35.071 [155/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:02:35.071 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:02:35.071 [157/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:02:35.071 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:02:35.071 [159/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:02:35.071 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:02:35.071 [161/203] Compiling C object tools/zoned.p/zoned.c.o 00:02:35.071 [162/203] Compiling C object tools/lblk.p/lblk.c.o 00:02:35.329 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:02:35.329 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:02:35.329 [165/203] Compiling C object tools/kvs.p/kvs.c.o 00:02:35.329 [166/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:02:35.329 [167/203] Compiling C object tools/xdd.p/xdd.c.o 00:02:35.329 [168/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:02:35.329 [169/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:02:35.329 [170/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:02:35.329 [171/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:02:35.329 [172/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:02:35.329 [173/203] Linking static target lib/libxnvme.a 00:02:35.329 [174/203] Linking target tests/xnvme_tests_buf 00:02:35.587 [175/203] Linking target tests/xnvme_tests_enum 00:02:35.587 [176/203] Linking target tests/xnvme_tests_async_intf 00:02:35.587 
[177/203] Linking target tests/xnvme_tests_cli 00:02:35.587 [178/203] Linking target tests/xnvme_tests_xnvme_file 00:02:35.587 [179/203] Linking target tests/xnvme_tests_scc 00:02:35.587 [180/203] Linking target tests/xnvme_tests_ioworker 00:02:35.587 [181/203] Linking target tests/xnvme_tests_xnvme_cli 00:02:35.587 [182/203] Linking target tests/xnvme_tests_znd_append 00:02:35.587 [183/203] Linking target tests/xnvme_tests_lblk 00:02:35.587 [184/203] Linking target tests/xnvme_tests_znd_explicit_open 00:02:35.587 [185/203] Linking target tests/xnvme_tests_znd_zrwa 00:02:35.587 [186/203] Linking target tests/xnvme_tests_znd_state 00:02:35.587 [187/203] Linking target tests/xnvme_tests_map 00:02:35.587 [188/203] Linking target tools/xdd 00:02:35.587 [189/203] Linking target tools/xnvme 00:02:35.587 [190/203] Linking target tests/xnvme_tests_kvs 00:02:35.587 [191/203] Linking target tools/zoned 00:02:35.587 [192/203] Linking target tools/kvs 00:02:35.587 [193/203] Linking target examples/xnvme_dev 00:02:35.587 [194/203] Linking target examples/xnvme_hello 00:02:35.587 [195/203] Linking target examples/xnvme_enum 00:02:35.587 [196/203] Linking target tools/lblk 00:02:35.587 [197/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:02:35.587 [198/203] Linking target examples/xnvme_single_sync 00:02:35.587 [199/203] Linking target examples/xnvme_single_async 00:02:35.587 [200/203] Linking target examples/zoned_io_async 00:02:35.587 [201/203] Linking target examples/xnvme_io_async 00:02:35.587 [202/203] Linking target examples/zoned_io_sync 00:02:35.587 [203/203] Linking target tools/xnvme_file 00:02:35.587 INFO: autodetecting backend as ninja 00:02:35.587 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:35.844 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:45.940 The Meson build system 00:02:45.940 Version: 1.3.1 00:02:45.940 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:45.940 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:45.940 Build type: native build 00:02:45.940 Program cat found: YES (/usr/bin/cat) 00:02:45.940 Project name: DPDK 00:02:45.940 Project version: 23.11.0 00:02:45.940 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:45.940 C linker for the host machine: cc ld.bfd 2.39-16 00:02:45.940 Host machine cpu family: x86_64 00:02:45.940 Host machine cpu: x86_64 00:02:45.940 Message: ## Building in Developer Mode ## 00:02:45.940 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:45.940 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:45.940 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:45.940 Program python3 found: YES (/usr/bin/python3) 00:02:45.940 Program cat found: YES (/usr/bin/cat) 00:02:45.940 Compiler for C supports arguments -march=native: YES 00:02:45.940 Checking for size of "void *" : 8 00:02:45.940 Checking for size of "void *" : 8 (cached) 00:02:45.940 Library m found: YES 00:02:45.940 Library numa found: YES 00:02:45.940 Has header "numaif.h" : YES 00:02:45.940 Library fdt found: NO 00:02:45.940 Library execinfo found: NO 00:02:45.940 Has header "execinfo.h" : YES 00:02:45.940 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:45.940 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:45.940 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:45.940 Run-time 
dependency jansson found: NO (tried pkgconfig) 00:02:45.940 Run-time dependency openssl found: YES 3.0.9 00:02:45.940 Run-time dependency libpcap found: YES 1.10.4 00:02:45.940 Has header "pcap.h" with dependency libpcap: YES 00:02:45.940 Compiler for C supports arguments -Wcast-qual: YES 00:02:45.940 Compiler for C supports arguments -Wdeprecated: YES 00:02:45.940 Compiler for C supports arguments -Wformat: YES 00:02:45.940 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:45.940 Compiler for C supports arguments -Wformat-security: NO 00:02:45.940 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:45.940 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:45.940 Compiler for C supports arguments -Wnested-externs: YES 00:02:45.940 Compiler for C supports arguments -Wold-style-definition: YES 00:02:45.940 Compiler for C supports arguments -Wpointer-arith: YES 00:02:45.940 Compiler for C supports arguments -Wsign-compare: YES 00:02:45.940 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:45.940 Compiler for C supports arguments -Wundef: YES 00:02:45.940 Compiler for C supports arguments -Wwrite-strings: YES 00:02:45.940 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:45.940 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:45.940 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:45.940 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:45.940 Program objdump found: YES (/usr/bin/objdump) 00:02:45.940 Compiler for C supports arguments -mavx512f: YES 00:02:45.940 Checking if "AVX512 checking" compiles: YES 00:02:45.940 Fetching value of define "__SSE4_2__" : 1 00:02:45.940 Fetching value of define "__AES__" : 1 00:02:45.940 Fetching value of define "__AVX__" : 1 00:02:45.940 Fetching value of define "__AVX2__" : 1 00:02:45.940 Fetching value of define "__AVX512BW__" : 1 00:02:45.940 Fetching value of define "__AVX512CD__" : 1 00:02:45.940 Fetching value of define "__AVX512DQ__" : 1 00:02:45.940 Fetching value of define "__AVX512F__" : 1 00:02:45.940 Fetching value of define "__AVX512VL__" : 1 00:02:45.940 Fetching value of define "__PCLMUL__" : 1 00:02:45.940 Fetching value of define "__RDRND__" : 1 00:02:45.940 Fetching value of define "__RDSEED__" : 1 00:02:45.940 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:45.940 Fetching value of define "__znver1__" : (undefined) 00:02:45.940 Fetching value of define "__znver2__" : (undefined) 00:02:45.940 Fetching value of define "__znver3__" : (undefined) 00:02:45.940 Fetching value of define "__znver4__" : (undefined) 00:02:45.940 Library asan found: YES 00:02:45.940 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:45.940 Message: lib/log: Defining dependency "log" 00:02:45.940 Message: lib/kvargs: Defining dependency "kvargs" 00:02:45.940 Message: lib/telemetry: Defining dependency "telemetry" 00:02:45.940 Library rt found: YES 00:02:45.940 Checking for function "getentropy" : NO 00:02:45.940 Message: lib/eal: Defining dependency "eal" 00:02:45.940 Message: lib/ring: Defining dependency "ring" 00:02:45.940 Message: lib/rcu: Defining dependency "rcu" 00:02:45.940 Message: lib/mempool: Defining dependency "mempool" 00:02:45.940 Message: lib/mbuf: Defining dependency "mbuf" 00:02:45.940 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:45.940 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:45.940 Fetching value of define "__AVX512BW__" : 1 (cached) 
00:02:45.940 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:45.940 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:45.940 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:45.940 Compiler for C supports arguments -mpclmul: YES 00:02:45.940 Compiler for C supports arguments -maes: YES 00:02:45.940 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:45.940 Compiler for C supports arguments -mavx512bw: YES 00:02:45.940 Compiler for C supports arguments -mavx512dq: YES 00:02:45.940 Compiler for C supports arguments -mavx512vl: YES 00:02:45.940 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:45.940 Compiler for C supports arguments -mavx2: YES 00:02:45.940 Compiler for C supports arguments -mavx: YES 00:02:45.940 Message: lib/net: Defining dependency "net" 00:02:45.940 Message: lib/meter: Defining dependency "meter" 00:02:45.940 Message: lib/ethdev: Defining dependency "ethdev" 00:02:45.940 Message: lib/pci: Defining dependency "pci" 00:02:45.940 Message: lib/cmdline: Defining dependency "cmdline" 00:02:45.940 Message: lib/hash: Defining dependency "hash" 00:02:45.940 Message: lib/timer: Defining dependency "timer" 00:02:45.940 Message: lib/compressdev: Defining dependency "compressdev" 00:02:45.940 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:45.940 Message: lib/dmadev: Defining dependency "dmadev" 00:02:45.940 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:45.940 Message: lib/power: Defining dependency "power" 00:02:45.940 Message: lib/reorder: Defining dependency "reorder" 00:02:45.940 Message: lib/security: Defining dependency "security" 00:02:45.940 Has header "linux/userfaultfd.h" : YES 00:02:45.940 Has header "linux/vduse.h" : YES 00:02:45.940 Message: lib/vhost: Defining dependency "vhost" 00:02:45.940 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:45.940 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:45.940 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:45.940 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:45.940 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:45.940 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:45.940 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:45.940 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:45.940 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:45.940 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:45.940 Program doxygen found: YES (/usr/bin/doxygen) 00:02:45.940 Configuring doxy-api-html.conf using configuration 00:02:45.940 Configuring doxy-api-man.conf using configuration 00:02:45.940 Program mandb found: YES (/usr/bin/mandb) 00:02:45.940 Program sphinx-build found: NO 00:02:45.940 Configuring rte_build_config.h using configuration 00:02:45.940 Message: 00:02:45.940 ================= 00:02:45.940 Applications Enabled 00:02:45.940 ================= 00:02:45.940 00:02:45.940 apps: 00:02:45.940 00:02:45.940 00:02:45.940 Message: 00:02:45.940 ================= 00:02:45.940 Libraries Enabled 00:02:45.940 ================= 00:02:45.940 00:02:45.940 libs: 00:02:45.940 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:45.940 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:45.940 cryptodev, dmadev, power, reorder, security, vhost, 00:02:45.940 00:02:45.940 Message: 
00:02:45.940 =============== 00:02:45.940 Drivers Enabled 00:02:45.940 =============== 00:02:45.940 00:02:45.940 common: 00:02:45.940 00:02:45.940 bus: 00:02:45.940 pci, vdev, 00:02:45.940 mempool: 00:02:45.940 ring, 00:02:45.941 dma: 00:02:45.941 00:02:45.941 net: 00:02:45.941 00:02:45.941 crypto: 00:02:45.941 00:02:45.941 compress: 00:02:45.941 00:02:45.941 vdpa: 00:02:45.941 00:02:45.941 00:02:45.941 Message: 00:02:45.941 ================= 00:02:45.941 Content Skipped 00:02:45.941 ================= 00:02:45.941 00:02:45.941 apps: 00:02:45.941 dumpcap: explicitly disabled via build config 00:02:45.941 graph: explicitly disabled via build config 00:02:45.941 pdump: explicitly disabled via build config 00:02:45.941 proc-info: explicitly disabled via build config 00:02:45.941 test-acl: explicitly disabled via build config 00:02:45.941 test-bbdev: explicitly disabled via build config 00:02:45.941 test-cmdline: explicitly disabled via build config 00:02:45.941 test-compress-perf: explicitly disabled via build config 00:02:45.941 test-crypto-perf: explicitly disabled via build config 00:02:45.941 test-dma-perf: explicitly disabled via build config 00:02:45.941 test-eventdev: explicitly disabled via build config 00:02:45.941 test-fib: explicitly disabled via build config 00:02:45.941 test-flow-perf: explicitly disabled via build config 00:02:45.941 test-gpudev: explicitly disabled via build config 00:02:45.941 test-mldev: explicitly disabled via build config 00:02:45.941 test-pipeline: explicitly disabled via build config 00:02:45.941 test-pmd: explicitly disabled via build config 00:02:45.941 test-regex: explicitly disabled via build config 00:02:45.941 test-sad: explicitly disabled via build config 00:02:45.941 test-security-perf: explicitly disabled via build config 00:02:45.941 00:02:45.941 libs: 00:02:45.941 metrics: explicitly disabled via build config 00:02:45.941 acl: explicitly disabled via build config 00:02:45.941 bbdev: explicitly disabled via build config 00:02:45.941 bitratestats: explicitly disabled via build config 00:02:45.941 bpf: explicitly disabled via build config 00:02:45.941 cfgfile: explicitly disabled via build config 00:02:45.941 distributor: explicitly disabled via build config 00:02:45.941 efd: explicitly disabled via build config 00:02:45.941 eventdev: explicitly disabled via build config 00:02:45.941 dispatcher: explicitly disabled via build config 00:02:45.941 gpudev: explicitly disabled via build config 00:02:45.941 gro: explicitly disabled via build config 00:02:45.941 gso: explicitly disabled via build config 00:02:45.941 ip_frag: explicitly disabled via build config 00:02:45.941 jobstats: explicitly disabled via build config 00:02:45.941 latencystats: explicitly disabled via build config 00:02:45.941 lpm: explicitly disabled via build config 00:02:45.941 member: explicitly disabled via build config 00:02:45.941 pcapng: explicitly disabled via build config 00:02:45.941 rawdev: explicitly disabled via build config 00:02:45.941 regexdev: explicitly disabled via build config 00:02:45.941 mldev: explicitly disabled via build config 00:02:45.941 rib: explicitly disabled via build config 00:02:45.941 sched: explicitly disabled via build config 00:02:45.941 stack: explicitly disabled via build config 00:02:45.941 ipsec: explicitly disabled via build config 00:02:45.941 pdcp: explicitly disabled via build config 00:02:45.941 fib: explicitly disabled via build config 00:02:45.941 port: explicitly disabled via build config 00:02:45.941 pdump: explicitly disabled via 
build config 00:02:45.941 table: explicitly disabled via build config 00:02:45.941 pipeline: explicitly disabled via build config 00:02:45.941 graph: explicitly disabled via build config 00:02:45.941 node: explicitly disabled via build config 00:02:45.941 00:02:45.941 drivers: 00:02:45.941 common/cpt: not in enabled drivers build config 00:02:45.941 common/dpaax: not in enabled drivers build config 00:02:45.941 common/iavf: not in enabled drivers build config 00:02:45.941 common/idpf: not in enabled drivers build config 00:02:45.941 common/mvep: not in enabled drivers build config 00:02:45.941 common/octeontx: not in enabled drivers build config 00:02:45.941 bus/auxiliary: not in enabled drivers build config 00:02:45.941 bus/cdx: not in enabled drivers build config 00:02:45.941 bus/dpaa: not in enabled drivers build config 00:02:45.941 bus/fslmc: not in enabled drivers build config 00:02:45.941 bus/ifpga: not in enabled drivers build config 00:02:45.941 bus/platform: not in enabled drivers build config 00:02:45.941 bus/vmbus: not in enabled drivers build config 00:02:45.941 common/cnxk: not in enabled drivers build config 00:02:45.941 common/mlx5: not in enabled drivers build config 00:02:45.941 common/nfp: not in enabled drivers build config 00:02:45.941 common/qat: not in enabled drivers build config 00:02:45.941 common/sfc_efx: not in enabled drivers build config 00:02:45.941 mempool/bucket: not in enabled drivers build config 00:02:45.941 mempool/cnxk: not in enabled drivers build config 00:02:45.941 mempool/dpaa: not in enabled drivers build config 00:02:45.941 mempool/dpaa2: not in enabled drivers build config 00:02:45.941 mempool/octeontx: not in enabled drivers build config 00:02:45.941 mempool/stack: not in enabled drivers build config 00:02:45.941 dma/cnxk: not in enabled drivers build config 00:02:45.941 dma/dpaa: not in enabled drivers build config 00:02:45.941 dma/dpaa2: not in enabled drivers build config 00:02:45.941 dma/hisilicon: not in enabled drivers build config 00:02:45.941 dma/idxd: not in enabled drivers build config 00:02:45.941 dma/ioat: not in enabled drivers build config 00:02:45.941 dma/skeleton: not in enabled drivers build config 00:02:45.941 net/af_packet: not in enabled drivers build config 00:02:45.941 net/af_xdp: not in enabled drivers build config 00:02:45.941 net/ark: not in enabled drivers build config 00:02:45.941 net/atlantic: not in enabled drivers build config 00:02:45.941 net/avp: not in enabled drivers build config 00:02:45.941 net/axgbe: not in enabled drivers build config 00:02:45.941 net/bnx2x: not in enabled drivers build config 00:02:45.941 net/bnxt: not in enabled drivers build config 00:02:45.941 net/bonding: not in enabled drivers build config 00:02:45.941 net/cnxk: not in enabled drivers build config 00:02:45.941 net/cpfl: not in enabled drivers build config 00:02:45.941 net/cxgbe: not in enabled drivers build config 00:02:45.941 net/dpaa: not in enabled drivers build config 00:02:45.941 net/dpaa2: not in enabled drivers build config 00:02:45.941 net/e1000: not in enabled drivers build config 00:02:45.941 net/ena: not in enabled drivers build config 00:02:45.941 net/enetc: not in enabled drivers build config 00:02:45.941 net/enetfec: not in enabled drivers build config 00:02:45.941 net/enic: not in enabled drivers build config 00:02:45.941 net/failsafe: not in enabled drivers build config 00:02:45.941 net/fm10k: not in enabled drivers build config 00:02:45.941 net/gve: not in enabled drivers build config 00:02:45.941 net/hinic: not in 
enabled drivers build config 00:02:45.941 net/hns3: not in enabled drivers build config 00:02:45.941 net/i40e: not in enabled drivers build config 00:02:45.941 net/iavf: not in enabled drivers build config 00:02:45.941 net/ice: not in enabled drivers build config 00:02:45.941 net/idpf: not in enabled drivers build config 00:02:45.941 net/igc: not in enabled drivers build config 00:02:45.941 net/ionic: not in enabled drivers build config 00:02:45.941 net/ipn3ke: not in enabled drivers build config 00:02:45.941 net/ixgbe: not in enabled drivers build config 00:02:45.941 net/mana: not in enabled drivers build config 00:02:45.941 net/memif: not in enabled drivers build config 00:02:45.941 net/mlx4: not in enabled drivers build config 00:02:45.941 net/mlx5: not in enabled drivers build config 00:02:45.941 net/mvneta: not in enabled drivers build config 00:02:45.941 net/mvpp2: not in enabled drivers build config 00:02:45.941 net/netvsc: not in enabled drivers build config 00:02:45.941 net/nfb: not in enabled drivers build config 00:02:45.941 net/nfp: not in enabled drivers build config 00:02:45.941 net/ngbe: not in enabled drivers build config 00:02:45.941 net/null: not in enabled drivers build config 00:02:45.941 net/octeontx: not in enabled drivers build config 00:02:45.941 net/octeon_ep: not in enabled drivers build config 00:02:45.941 net/pcap: not in enabled drivers build config 00:02:45.941 net/pfe: not in enabled drivers build config 00:02:45.941 net/qede: not in enabled drivers build config 00:02:45.941 net/ring: not in enabled drivers build config 00:02:45.941 net/sfc: not in enabled drivers build config 00:02:45.941 net/softnic: not in enabled drivers build config 00:02:45.941 net/tap: not in enabled drivers build config 00:02:45.941 net/thunderx: not in enabled drivers build config 00:02:45.941 net/txgbe: not in enabled drivers build config 00:02:45.941 net/vdev_netvsc: not in enabled drivers build config 00:02:45.941 net/vhost: not in enabled drivers build config 00:02:45.941 net/virtio: not in enabled drivers build config 00:02:45.941 net/vmxnet3: not in enabled drivers build config 00:02:45.941 raw/*: missing internal dependency, "rawdev" 00:02:45.941 crypto/armv8: not in enabled drivers build config 00:02:45.941 crypto/bcmfs: not in enabled drivers build config 00:02:45.942 crypto/caam_jr: not in enabled drivers build config 00:02:45.942 crypto/ccp: not in enabled drivers build config 00:02:45.942 crypto/cnxk: not in enabled drivers build config 00:02:45.942 crypto/dpaa_sec: not in enabled drivers build config 00:02:45.942 crypto/dpaa2_sec: not in enabled drivers build config 00:02:45.942 crypto/ipsec_mb: not in enabled drivers build config 00:02:45.942 crypto/mlx5: not in enabled drivers build config 00:02:45.942 crypto/mvsam: not in enabled drivers build config 00:02:45.942 crypto/nitrox: not in enabled drivers build config 00:02:45.942 crypto/null: not in enabled drivers build config 00:02:45.942 crypto/octeontx: not in enabled drivers build config 00:02:45.942 crypto/openssl: not in enabled drivers build config 00:02:45.942 crypto/scheduler: not in enabled drivers build config 00:02:45.942 crypto/uadk: not in enabled drivers build config 00:02:45.942 crypto/virtio: not in enabled drivers build config 00:02:45.942 compress/isal: not in enabled drivers build config 00:02:45.942 compress/mlx5: not in enabled drivers build config 00:02:45.942 compress/octeontx: not in enabled drivers build config 00:02:45.942 compress/zlib: not in enabled drivers build config 00:02:45.942 regex/*: 
missing internal dependency, "regexdev" 00:02:45.942 ml/*: missing internal dependency, "mldev" 00:02:45.942 vdpa/ifc: not in enabled drivers build config 00:02:45.942 vdpa/mlx5: not in enabled drivers build config 00:02:45.942 vdpa/nfp: not in enabled drivers build config 00:02:45.942 vdpa/sfc: not in enabled drivers build config 00:02:45.942 event/*: missing internal dependency, "eventdev" 00:02:45.942 baseband/*: missing internal dependency, "bbdev" 00:02:45.942 gpu/*: missing internal dependency, "gpudev" 00:02:45.942 00:02:45.942 00:02:45.942 Build targets in project: 85 00:02:45.942 00:02:45.942 DPDK 23.11.0 00:02:45.942 00:02:45.942 User defined options 00:02:45.942 buildtype : debug 00:02:45.942 default_library : shared 00:02:45.942 libdir : lib 00:02:45.942 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:45.942 b_sanitize : address 00:02:45.942 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:45.942 c_link_args : 00:02:45.942 cpu_instruction_set: native 00:02:45.942 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:45.942 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:45.942 enable_docs : false 00:02:45.942 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:45.942 enable_kmods : false 00:02:45.942 tests : false 00:02:45.942 00:02:45.942 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:45.942 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:45.942 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:45.942 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:45.942 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:45.942 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:45.942 [5/265] Linking static target lib/librte_kvargs.a 00:02:45.942 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:45.942 [7/265] Linking static target lib/librte_log.a 00:02:45.942 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:45.942 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:45.942 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:45.942 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.942 [12/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:45.942 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:45.942 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:45.942 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:45.942 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:45.942 [17/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:45.942 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:45.942 [19/265] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:46.217 [20/265] Linking static target lib/librte_telemetry.a 00:02:46.217 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:46.217 [22/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.475 [23/265] Linking target lib/librte_log.so.24.0 00:02:46.475 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:46.475 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:46.475 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:46.475 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:46.475 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:46.734 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:46.734 [30/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:46.734 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:46.734 [32/265] Linking target lib/librte_kvargs.so.24.0 00:02:46.734 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:46.992 [34/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:46.992 [35/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.992 [36/265] Linking target lib/librte_telemetry.so.24.0 00:02:46.992 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:46.992 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:47.250 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:47.250 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:47.250 [41/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:47.250 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:47.250 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:47.250 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:47.250 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.250 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.526 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.790 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:47.790 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:47.790 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:47.790 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:47.790 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.790 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:47.790 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:48.049 [55/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:48.049 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:48.049 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:48.049 [58/265] Compiling C object 
lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:48.049 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.309 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:48.309 [61/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:48.309 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:48.309 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.309 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:48.309 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.567 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:48.567 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.567 [68/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:48.567 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:48.567 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:48.825 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:48.825 [72/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.825 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.825 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:48.825 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.825 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:49.083 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:49.083 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:49.083 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:49.342 [80/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:49.342 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:49.342 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:49.342 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:49.600 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:49.600 [85/265] Linking static target lib/librte_ring.a 00:02:49.600 [86/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:49.600 [87/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:49.600 [88/265] Linking static target lib/librte_eal.a 00:02:49.600 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.858 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.858 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.858 [92/265] Linking static target lib/librte_rcu.a 00:02:49.858 [93/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.858 [94/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:49.858 [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.118 [96/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:50.118 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:50.118 [98/265] Linking static target lib/librte_mempool.a 00:02:50.118 [99/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.378 
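The `User defined options` block printed before this build started is enough to reconstruct, approximately, the Meson configuration SPDK applied to its bundled DPDK. A hedged sketch of the equivalent manual invocation (option spellings and values taken from that summary; the long `disable_apps`/`disable_libs` lists are abbreviated here with `...`, and SPDK's own configure wrapper may assemble the command differently):

  cd /home/vagrant/spdk_repo/spdk/dpdk
  meson setup build-tmp \
    --buildtype=debug \
    --default-library=shared \
    --libdir=lib \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps='dumpcap,graph,pdump,...' \
    -Ddisable_libs='acl,bbdev,bitratestats,...' \
    -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring' \
    -Denable_docs=false -Denable_kmods=false -Dtests=false
  ninja -C build-tmp -j 10

The `ninja -C … build-tmp -j 10` step matches the backend command the log itself reports Meson calculating for this build directory.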
[100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:50.378 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:50.378 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:50.378 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:50.378 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:50.638 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:50.638 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:50.638 [107/265] Linking static target lib/librte_net.a 00:02:50.638 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:50.638 [109/265] Linking static target lib/librte_meter.a 00:02:50.896 [110/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.896 [111/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:50.896 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:50.896 [113/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.896 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:51.154 [115/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.154 [116/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:51.154 [117/265] Linking static target lib/librte_mbuf.a 00:02:51.413 [118/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.413 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:51.673 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.931 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:51.931 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:51.931 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:51.931 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.931 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:52.190 [126/265] Linking static target lib/librte_pci.a 00:02:52.190 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:52.190 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:52.190 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:52.190 [130/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.190 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:52.448 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:52.448 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:52.448 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:52.448 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:52.448 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:52.448 [137/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.449 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:52.449 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:52.449 [140/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:52.706 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:52.706 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:52.706 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:52.706 [144/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:52.706 [145/265] Linking static target lib/librte_cmdline.a 00:02:52.706 [146/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.964 [147/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:52.964 [148/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:53.222 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:53.222 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:53.222 [151/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:53.222 [152/265] Linking static target lib/librte_timer.a 00:02:53.480 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:53.738 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:53.738 [155/265] Linking static target lib/librte_compressdev.a 00:02:53.738 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:53.997 [157/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:53.997 [158/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:53.997 [159/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:53.997 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.997 [161/265] Linking static target lib/librte_ethdev.a 00:02:54.255 [162/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:54.255 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:54.255 [164/265] Linking static target lib/librte_dmadev.a 00:02:54.255 [165/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:54.512 [166/265] Linking static target lib/librte_hash.a 00:02:54.512 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:54.512 [168/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.512 [169/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:54.512 [170/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:54.770 [171/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:54.770 [172/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.027 [173/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:55.027 [174/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:55.027 [175/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.028 [176/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:55.028 [177/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:55.028 [178/265] Linking static target lib/librte_cryptodev.a 00:02:55.028 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:55.285 [180/265] Compiling 
C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:55.285 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:55.285 [182/265] Linking static target lib/librte_power.a 00:02:55.542 [183/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.542 [184/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:55.542 [185/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:55.800 [186/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:55.800 [187/265] Linking static target lib/librte_reorder.a 00:02:55.800 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:55.800 [189/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:55.800 [190/265] Linking static target lib/librte_security.a 00:02:56.376 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.376 [192/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.376 [193/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:56.376 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.376 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:56.633 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:56.633 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:56.891 [198/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:56.891 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:57.150 [200/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:57.150 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:57.150 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:57.150 [203/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.150 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:57.408 [205/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:57.408 [206/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:57.408 [207/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:57.408 [208/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:57.666 [209/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:57.666 [210/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.666 [211/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:57.666 [212/265] Linking static target drivers/librte_bus_vdev.a 00:02:57.666 [213/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:57.666 [214/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.666 [215/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:57.666 [216/265] Linking static target drivers/librte_bus_pci.a 00:02:57.666 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.666 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.924 [219/265] 
Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.924 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:57.924 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.924 [222/265] Linking static target drivers/librte_mempool_ring.a 00:02:57.924 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:58.183 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.556 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:00.124 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.383 [227/265] Linking target lib/librte_eal.so.24.0 00:03:00.383 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:00.383 [229/265] Linking target lib/librte_meter.so.24.0 00:03:00.383 [230/265] Linking target lib/librte_ring.so.24.0 00:03:00.383 [231/265] Linking target lib/librte_pci.so.24.0 00:03:00.383 [232/265] Linking target lib/librte_dmadev.so.24.0 00:03:00.383 [233/265] Linking target lib/librte_timer.so.24.0 00:03:00.383 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:03:00.642 [235/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:00.642 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:00.642 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:00.642 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:00.642 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:00.642 [240/265] Linking target lib/librte_rcu.so.24.0 00:03:00.642 [241/265] Linking target lib/librte_mempool.so.24.0 00:03:00.642 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:03:00.902 [243/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:00.902 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:00.902 [245/265] Linking target drivers/librte_mempool_ring.so.24.0 00:03:00.902 [246/265] Linking target lib/librte_mbuf.so.24.0 00:03:01.162 [247/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:01.162 [248/265] Linking target lib/librte_reorder.so.24.0 00:03:01.162 [249/265] Linking target lib/librte_compressdev.so.24.0 00:03:01.162 [250/265] Linking target lib/librte_net.so.24.0 00:03:01.162 [251/265] Linking target lib/librte_cryptodev.so.24.0 00:03:01.162 [252/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:01.162 [253/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:01.162 [254/265] Linking target lib/librte_hash.so.24.0 00:03:01.422 [255/265] Linking target lib/librte_cmdline.so.24.0 00:03:01.422 [256/265] Linking target lib/librte_security.so.24.0 00:03:01.422 [257/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:02.361 [258/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.361 [259/265] Linking target lib/librte_ethdev.so.24.0 00:03:02.361 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 
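The `Generating symbol file … .symbols` entries interleaved with these link steps are Meson's relink-avoidance bookkeeping: after linking a shared library it records the library's exported symbols, and dependents are only relinked when that exported set actually changes. To eyeball those exports directly on one of the freshly linked libraries (the path below is assumed from the build directory and the `Linking target lib/librte_eal.so.24.0` entry earlier in the log):

  nm -D --defined-only \
    /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/lib/librte_eal.so.24.0 | head -n 20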
00:03:02.621 [261/265] Linking target lib/librte_power.so.24.0 00:03:03.557 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:03.557 [263/265] Linking static target lib/librte_vhost.a 00:03:06.095 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.095 [265/265] Linking target lib/librte_vhost.so.24.0 00:03:06.095 INFO: autodetecting backend as ninja 00:03:06.095 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:07.033 CC lib/log/log.o 00:03:07.034 CC lib/log/log_flags.o 00:03:07.034 CC lib/log/log_deprecated.o 00:03:07.034 CC lib/ut_mock/mock.o 00:03:07.034 CC lib/ut/ut.o 00:03:07.034 LIB libspdk_ut_mock.a 00:03:07.034 LIB libspdk_log.a 00:03:07.292 SO libspdk_ut_mock.so.6.0 00:03:07.292 LIB libspdk_ut.a 00:03:07.292 SO libspdk_log.so.7.0 00:03:07.292 SYMLINK libspdk_ut_mock.so 00:03:07.292 SO libspdk_ut.so.2.0 00:03:07.292 SYMLINK libspdk_log.so 00:03:07.292 SYMLINK libspdk_ut.so 00:03:07.551 CC lib/ioat/ioat.o 00:03:07.551 CC lib/dma/dma.o 00:03:07.551 CC lib/util/bit_array.o 00:03:07.551 CC lib/util/base64.o 00:03:07.551 CC lib/util/cpuset.o 00:03:07.551 CC lib/util/crc16.o 00:03:07.551 CC lib/util/crc32.o 00:03:07.551 CC lib/util/crc32c.o 00:03:07.551 CXX lib/trace_parser/trace.o 00:03:07.551 CC lib/vfio_user/host/vfio_user_pci.o 00:03:07.809 CC lib/vfio_user/host/vfio_user.o 00:03:07.809 CC lib/util/crc32_ieee.o 00:03:07.809 CC lib/util/crc64.o 00:03:07.809 CC lib/util/dif.o 00:03:07.809 LIB libspdk_dma.a 00:03:07.809 CC lib/util/fd.o 00:03:07.809 SO libspdk_dma.so.4.0 00:03:07.809 CC lib/util/file.o 00:03:07.809 CC lib/util/hexlify.o 00:03:07.809 CC lib/util/iov.o 00:03:07.809 SYMLINK libspdk_dma.so 00:03:07.809 CC lib/util/math.o 00:03:07.809 LIB libspdk_ioat.a 00:03:07.809 SO libspdk_ioat.so.7.0 00:03:07.809 CC lib/util/pipe.o 00:03:07.809 CC lib/util/strerror_tls.o 00:03:08.067 LIB libspdk_vfio_user.a 00:03:08.067 SYMLINK libspdk_ioat.so 00:03:08.067 CC lib/util/string.o 00:03:08.067 CC lib/util/uuid.o 00:03:08.067 CC lib/util/fd_group.o 00:03:08.067 SO libspdk_vfio_user.so.5.0 00:03:08.067 CC lib/util/xor.o 00:03:08.067 CC lib/util/zipf.o 00:03:08.067 SYMLINK libspdk_vfio_user.so 00:03:08.326 LIB libspdk_util.a 00:03:08.584 SO libspdk_util.so.9.0 00:03:08.584 LIB libspdk_trace_parser.a 00:03:08.584 SYMLINK libspdk_util.so 00:03:08.584 SO libspdk_trace_parser.so.5.0 00:03:08.843 SYMLINK libspdk_trace_parser.so 00:03:08.843 CC lib/env_dpdk/env.o 00:03:08.843 CC lib/env_dpdk/memory.o 00:03:08.843 CC lib/env_dpdk/pci.o 00:03:08.843 CC lib/env_dpdk/init.o 00:03:08.843 CC lib/env_dpdk/threads.o 00:03:08.843 CC lib/vmd/vmd.o 00:03:08.843 CC lib/json/json_parse.o 00:03:08.843 CC lib/idxd/idxd.o 00:03:08.843 CC lib/conf/conf.o 00:03:08.843 CC lib/rdma/common.o 00:03:08.843 CC lib/env_dpdk/pci_ioat.o 00:03:09.102 CC lib/json/json_util.o 00:03:09.102 CC lib/idxd/idxd_user.o 00:03:09.102 LIB libspdk_conf.a 00:03:09.102 SO libspdk_conf.so.6.0 00:03:09.102 SYMLINK libspdk_conf.so 00:03:09.102 CC lib/vmd/led.o 00:03:09.102 CC lib/rdma/rdma_verbs.o 00:03:09.102 CC lib/env_dpdk/pci_virtio.o 00:03:09.102 CC lib/json/json_write.o 00:03:09.362 CC lib/env_dpdk/pci_vmd.o 00:03:09.362 CC lib/env_dpdk/pci_idxd.o 00:03:09.362 CC lib/env_dpdk/pci_event.o 00:03:09.362 CC lib/env_dpdk/sigbus_handler.o 00:03:09.362 LIB libspdk_rdma.a 00:03:09.362 CC lib/env_dpdk/pci_dpdk.o 00:03:09.362 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:09.362 SO libspdk_rdma.so.6.0 
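From `CC lib/log/log.o` onward the output switches from DPDK's ninja backend to SPDK's own make-driven build, whose four recurring labels are: `CC` (compile one object), `LIB` (archive a static library), `SO` (link the versioned shared object), and `SYMLINK` (create the unversioned development symlink). Each `SO`/`SYMLINK` pair is the usual versioned-soname idiom; a generic sketch with hypothetical file names (not SPDK's actual link command):

  # link a versioned shared object whose soname pins the ABI major version
  cc -shared -Wl,-soname,libexample.so.7 -o libexample.so.7.0 a.o b.o
  # then provide the unversioned name that -lexample resolves at build time
  ln -sf libexample.so.7.0 libexample.so

At run time the dynamic loader resolves the soname (`libexample.so.7`), so applications built against one patch release keep working across compatible updates.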
00:03:09.362 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:09.362 LIB libspdk_idxd.a 00:03:09.621 SYMLINK libspdk_rdma.so 00:03:09.621 SO libspdk_idxd.so.12.0 00:03:09.621 LIB libspdk_json.a 00:03:09.621 SO libspdk_json.so.6.0 00:03:09.621 LIB libspdk_vmd.a 00:03:09.621 SYMLINK libspdk_idxd.so 00:03:09.621 SO libspdk_vmd.so.6.0 00:03:09.621 SYMLINK libspdk_json.so 00:03:09.621 SYMLINK libspdk_vmd.so 00:03:09.881 CC lib/jsonrpc/jsonrpc_server.o 00:03:09.881 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:09.881 CC lib/jsonrpc/jsonrpc_client.o 00:03:09.881 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:10.450 LIB libspdk_jsonrpc.a 00:03:10.450 SO libspdk_jsonrpc.so.6.0 00:03:10.450 SYMLINK libspdk_jsonrpc.so 00:03:10.450 LIB libspdk_env_dpdk.a 00:03:10.709 SO libspdk_env_dpdk.so.14.0 00:03:10.709 CC lib/rpc/rpc.o 00:03:10.709 SYMLINK libspdk_env_dpdk.so 00:03:10.969 LIB libspdk_rpc.a 00:03:10.969 SO libspdk_rpc.so.6.0 00:03:11.227 SYMLINK libspdk_rpc.so 00:03:11.486 CC lib/trace/trace.o 00:03:11.486 CC lib/trace/trace_rpc.o 00:03:11.486 CC lib/trace/trace_flags.o 00:03:11.486 CC lib/notify/notify.o 00:03:11.486 CC lib/notify/notify_rpc.o 00:03:11.486 CC lib/keyring/keyring_rpc.o 00:03:11.486 CC lib/keyring/keyring.o 00:03:11.744 LIB libspdk_notify.a 00:03:11.744 SO libspdk_notify.so.6.0 00:03:11.744 LIB libspdk_trace.a 00:03:11.744 LIB libspdk_keyring.a 00:03:11.744 SO libspdk_trace.so.10.0 00:03:11.744 SO libspdk_keyring.so.1.0 00:03:11.744 SYMLINK libspdk_notify.so 00:03:11.744 SYMLINK libspdk_keyring.so 00:03:11.744 SYMLINK libspdk_trace.so 00:03:12.312 CC lib/sock/sock.o 00:03:12.312 CC lib/sock/sock_rpc.o 00:03:12.312 CC lib/thread/thread.o 00:03:12.312 CC lib/thread/iobuf.o 00:03:12.572 LIB libspdk_sock.a 00:03:12.572 SO libspdk_sock.so.9.0 00:03:12.832 SYMLINK libspdk_sock.so 00:03:13.091 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:13.091 CC lib/nvme/nvme_ns_cmd.o 00:03:13.091 CC lib/nvme/nvme_ctrlr.o 00:03:13.091 CC lib/nvme/nvme_fabric.o 00:03:13.091 CC lib/nvme/nvme_pcie_common.o 00:03:13.091 CC lib/nvme/nvme_ns.o 00:03:13.091 CC lib/nvme/nvme_pcie.o 00:03:13.091 CC lib/nvme/nvme_qpair.o 00:03:13.091 CC lib/nvme/nvme.o 00:03:14.039 LIB libspdk_thread.a 00:03:14.040 CC lib/nvme/nvme_quirks.o 00:03:14.040 SO libspdk_thread.so.10.0 00:03:14.040 CC lib/nvme/nvme_transport.o 00:03:14.040 CC lib/nvme/nvme_discovery.o 00:03:14.040 SYMLINK libspdk_thread.so 00:03:14.040 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:14.040 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:14.040 CC lib/nvme/nvme_tcp.o 00:03:14.040 CC lib/nvme/nvme_opal.o 00:03:14.040 CC lib/nvme/nvme_io_msg.o 00:03:14.305 CC lib/nvme/nvme_poll_group.o 00:03:14.305 CC lib/nvme/nvme_zns.o 00:03:14.564 CC lib/nvme/nvme_stubs.o 00:03:14.564 CC lib/nvme/nvme_auth.o 00:03:14.564 CC lib/nvme/nvme_cuse.o 00:03:14.564 CC lib/nvme/nvme_rdma.o 00:03:14.822 CC lib/accel/accel.o 00:03:14.822 CC lib/blob/blobstore.o 00:03:14.822 CC lib/blob/request.o 00:03:14.822 CC lib/blob/zeroes.o 00:03:14.822 CC lib/blob/blob_bs_dev.o 00:03:15.081 CC lib/accel/accel_rpc.o 00:03:15.339 CC lib/init/json_config.o 00:03:15.339 CC lib/virtio/virtio.o 00:03:15.339 CC lib/virtio/virtio_vhost_user.o 00:03:15.339 CC lib/virtio/virtio_vfio_user.o 00:03:15.339 CC lib/virtio/virtio_pci.o 00:03:15.339 CC lib/init/subsystem.o 00:03:15.597 CC lib/init/subsystem_rpc.o 00:03:15.597 CC lib/init/rpc.o 00:03:15.597 CC lib/accel/accel_sw.o 00:03:15.855 LIB libspdk_virtio.a 00:03:15.855 LIB libspdk_init.a 00:03:15.855 SO libspdk_virtio.so.7.0 00:03:15.855 SO libspdk_init.so.5.0 00:03:15.855 SYMLINK libspdk_init.so 
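Once a handful of these libraries exist, a quick sanity check is to see what they were linked against; for example, listing which SPDK and DPDK shared objects the env_dpdk library pulled in (the `build/lib` output path is an assumption about this workspace layout, and the unversioned symlink name is used for convenience):

  ldd /home/vagrant/spdk_repo/spdk/build/lib/libspdk_env_dpdk.so \
    | grep -E 'libspdk|librte'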
00:03:15.855 SYMLINK libspdk_virtio.so 00:03:16.114 LIB libspdk_accel.a 00:03:16.114 SO libspdk_accel.so.15.0 00:03:16.114 LIB libspdk_nvme.a 00:03:16.114 SYMLINK libspdk_accel.so 00:03:16.373 CC lib/event/app.o 00:03:16.373 CC lib/event/reactor.o 00:03:16.373 CC lib/event/app_rpc.o 00:03:16.373 CC lib/event/scheduler_static.o 00:03:16.373 CC lib/event/log_rpc.o 00:03:16.373 SO libspdk_nvme.so.13.0 00:03:16.373 CC lib/bdev/bdev.o 00:03:16.373 CC lib/bdev/bdev_rpc.o 00:03:16.373 CC lib/bdev/scsi_nvme.o 00:03:16.373 CC lib/bdev/bdev_zone.o 00:03:16.373 CC lib/bdev/part.o 00:03:16.631 SYMLINK libspdk_nvme.so 00:03:16.978 LIB libspdk_event.a 00:03:16.978 SO libspdk_event.so.13.0 00:03:16.978 SYMLINK libspdk_event.so 00:03:18.391 LIB libspdk_blob.a 00:03:18.391 SO libspdk_blob.so.11.0 00:03:18.391 SYMLINK libspdk_blob.so 00:03:18.651 CC lib/blobfs/blobfs.o 00:03:18.651 CC lib/blobfs/tree.o 00:03:18.651 CC lib/lvol/lvol.o 00:03:19.219 LIB libspdk_bdev.a 00:03:19.219 SO libspdk_bdev.so.15.0 00:03:19.478 SYMLINK libspdk_bdev.so 00:03:19.478 LIB libspdk_blobfs.a 00:03:19.736 CC lib/ublk/ublk.o 00:03:19.736 CC lib/ublk/ublk_rpc.o 00:03:19.736 CC lib/scsi/dev.o 00:03:19.736 CC lib/scsi/lun.o 00:03:19.736 CC lib/scsi/port.o 00:03:19.736 CC lib/nvmf/ctrlr.o 00:03:19.736 CC lib/nbd/nbd.o 00:03:19.736 CC lib/ftl/ftl_core.o 00:03:19.736 SO libspdk_blobfs.so.10.0 00:03:19.736 LIB libspdk_lvol.a 00:03:19.736 SYMLINK libspdk_blobfs.so 00:03:19.736 CC lib/nvmf/ctrlr_discovery.o 00:03:19.736 SO libspdk_lvol.so.10.0 00:03:19.736 CC lib/scsi/scsi.o 00:03:19.736 CC lib/scsi/scsi_bdev.o 00:03:19.736 SYMLINK libspdk_lvol.so 00:03:19.736 CC lib/scsi/scsi_pr.o 00:03:19.736 CC lib/nvmf/ctrlr_bdev.o 00:03:19.995 CC lib/nvmf/subsystem.o 00:03:19.995 CC lib/scsi/scsi_rpc.o 00:03:19.995 CC lib/ftl/ftl_init.o 00:03:19.995 CC lib/nbd/nbd_rpc.o 00:03:19.995 CC lib/scsi/task.o 00:03:20.254 CC lib/nvmf/nvmf.o 00:03:20.254 CC lib/ftl/ftl_layout.o 00:03:20.254 LIB libspdk_nbd.a 00:03:20.254 SO libspdk_nbd.so.7.0 00:03:20.254 CC lib/ftl/ftl_debug.o 00:03:20.254 CC lib/ftl/ftl_io.o 00:03:20.254 SYMLINK libspdk_nbd.so 00:03:20.254 CC lib/nvmf/nvmf_rpc.o 00:03:20.254 LIB libspdk_scsi.a 00:03:20.514 LIB libspdk_ublk.a 00:03:20.514 SO libspdk_scsi.so.9.0 00:03:20.514 SO libspdk_ublk.so.3.0 00:03:20.514 CC lib/ftl/ftl_sb.o 00:03:20.514 SYMLINK libspdk_scsi.so 00:03:20.514 SYMLINK libspdk_ublk.so 00:03:20.514 CC lib/nvmf/transport.o 00:03:20.514 CC lib/nvmf/tcp.o 00:03:20.514 CC lib/ftl/ftl_l2p.o 00:03:20.514 CC lib/nvmf/rdma.o 00:03:20.774 CC lib/ftl/ftl_l2p_flat.o 00:03:20.774 CC lib/ftl/ftl_nv_cache.o 00:03:20.774 CC lib/ftl/ftl_band.o 00:03:21.034 CC lib/iscsi/conn.o 00:03:21.034 CC lib/ftl/ftl_band_ops.o 00:03:21.295 CC lib/iscsi/init_grp.o 00:03:21.295 CC lib/iscsi/iscsi.o 00:03:21.295 CC lib/ftl/ftl_writer.o 00:03:21.295 CC lib/vhost/vhost.o 00:03:21.609 CC lib/ftl/ftl_rq.o 00:03:21.609 CC lib/iscsi/md5.o 00:03:21.609 CC lib/ftl/ftl_reloc.o 00:03:21.609 CC lib/ftl/ftl_l2p_cache.o 00:03:21.609 CC lib/iscsi/param.o 00:03:21.609 CC lib/ftl/ftl_p2l.o 00:03:21.871 CC lib/ftl/mngt/ftl_mngt.o 00:03:21.871 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:21.871 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:22.132 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:22.132 CC lib/vhost/vhost_rpc.o 00:03:22.132 CC lib/iscsi/portal_grp.o 00:03:22.132 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:22.132 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:22.132 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:22.132 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:22.393 CC lib/iscsi/tgt_node.o 00:03:22.393 CC 
lib/iscsi/iscsi_subsystem.o 00:03:22.393 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:22.393 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:22.393 CC lib/vhost/vhost_scsi.o 00:03:22.393 CC lib/vhost/vhost_blk.o 00:03:22.393 CC lib/vhost/rte_vhost_user.o 00:03:22.653 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:22.653 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:22.653 CC lib/iscsi/iscsi_rpc.o 00:03:22.653 CC lib/iscsi/task.o 00:03:22.653 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:22.913 CC lib/ftl/utils/ftl_conf.o 00:03:22.913 CC lib/ftl/utils/ftl_md.o 00:03:22.913 CC lib/ftl/utils/ftl_mempool.o 00:03:22.913 CC lib/ftl/utils/ftl_bitmap.o 00:03:22.913 CC lib/ftl/utils/ftl_property.o 00:03:23.171 LIB libspdk_iscsi.a 00:03:23.172 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:23.172 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:23.172 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:23.172 SO libspdk_iscsi.so.8.0 00:03:23.172 LIB libspdk_nvmf.a 00:03:23.432 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:23.432 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:23.432 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:23.432 SYMLINK libspdk_iscsi.so 00:03:23.432 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:23.432 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:23.432 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:23.432 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:23.432 SO libspdk_nvmf.so.18.0 00:03:23.432 CC lib/ftl/base/ftl_base_dev.o 00:03:23.432 CC lib/ftl/base/ftl_base_bdev.o 00:03:23.432 CC lib/ftl/ftl_trace.o 00:03:23.432 LIB libspdk_vhost.a 00:03:23.691 SYMLINK libspdk_nvmf.so 00:03:23.691 SO libspdk_vhost.so.8.0 00:03:23.691 SYMLINK libspdk_vhost.so 00:03:23.691 LIB libspdk_ftl.a 00:03:23.949 SO libspdk_ftl.so.9.0 00:03:24.208 SYMLINK libspdk_ftl.so 00:03:24.782 CC module/env_dpdk/env_dpdk_rpc.o 00:03:24.782 CC module/accel/iaa/accel_iaa.o 00:03:24.782 CC module/sock/posix/posix.o 00:03:24.782 CC module/accel/ioat/accel_ioat.o 00:03:24.782 CC module/accel/error/accel_error.o 00:03:24.782 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:24.782 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:24.782 CC module/keyring/file/keyring.o 00:03:24.782 CC module/blob/bdev/blob_bdev.o 00:03:24.782 CC module/accel/dsa/accel_dsa.o 00:03:24.782 LIB libspdk_env_dpdk_rpc.a 00:03:24.782 SO libspdk_env_dpdk_rpc.so.6.0 00:03:24.782 CC module/keyring/file/keyring_rpc.o 00:03:24.782 LIB libspdk_scheduler_dpdk_governor.a 00:03:24.782 CC module/accel/error/accel_error_rpc.o 00:03:24.782 SYMLINK libspdk_env_dpdk_rpc.so 00:03:24.782 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:24.782 LIB libspdk_scheduler_dynamic.a 00:03:24.782 CC module/accel/ioat/accel_ioat_rpc.o 00:03:25.040 CC module/accel/iaa/accel_iaa_rpc.o 00:03:25.040 SO libspdk_scheduler_dynamic.so.4.0 00:03:25.040 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:25.040 CC module/accel/dsa/accel_dsa_rpc.o 00:03:25.040 SYMLINK libspdk_scheduler_dynamic.so 00:03:25.040 LIB libspdk_blob_bdev.a 00:03:25.040 LIB libspdk_keyring_file.a 00:03:25.040 SO libspdk_blob_bdev.so.11.0 00:03:25.040 CC module/scheduler/gscheduler/gscheduler.o 00:03:25.040 LIB libspdk_accel_error.a 00:03:25.040 LIB libspdk_accel_ioat.a 00:03:25.040 SO libspdk_keyring_file.so.1.0 00:03:25.040 LIB libspdk_accel_iaa.a 00:03:25.040 SO libspdk_accel_error.so.2.0 00:03:25.040 SO libspdk_accel_ioat.so.6.0 00:03:25.040 SYMLINK libspdk_blob_bdev.so 00:03:25.040 SO libspdk_accel_iaa.so.3.0 00:03:25.040 LIB libspdk_accel_dsa.a 00:03:25.040 SYMLINK libspdk_keyring_file.so 00:03:25.040 SYMLINK libspdk_accel_error.so 00:03:25.040 SO libspdk_accel_dsa.so.5.0 
00:03:25.040 SYMLINK libspdk_accel_ioat.so 00:03:25.040 SYMLINK libspdk_accel_iaa.so 00:03:25.299 LIB libspdk_scheduler_gscheduler.a 00:03:25.299 SYMLINK libspdk_accel_dsa.so 00:03:25.299 SO libspdk_scheduler_gscheduler.so.4.0 00:03:25.299 SYMLINK libspdk_scheduler_gscheduler.so 00:03:25.299 CC module/bdev/error/vbdev_error.o 00:03:25.299 CC module/bdev/malloc/bdev_malloc.o 00:03:25.299 CC module/bdev/lvol/vbdev_lvol.o 00:03:25.299 CC module/bdev/gpt/gpt.o 00:03:25.299 CC module/bdev/delay/vbdev_delay.o 00:03:25.299 CC module/bdev/null/bdev_null.o 00:03:25.299 CC module/blobfs/bdev/blobfs_bdev.o 00:03:25.299 CC module/bdev/nvme/bdev_nvme.o 00:03:25.558 CC module/bdev/passthru/vbdev_passthru.o 00:03:25.558 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:25.558 CC module/bdev/gpt/vbdev_gpt.o 00:03:25.558 LIB libspdk_sock_posix.a 00:03:25.558 SO libspdk_sock_posix.so.6.0 00:03:25.558 CC module/bdev/error/vbdev_error_rpc.o 00:03:25.558 CC module/bdev/null/bdev_null_rpc.o 00:03:25.818 SYMLINK libspdk_sock_posix.so 00:03:25.818 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:25.818 LIB libspdk_blobfs_bdev.a 00:03:25.818 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:25.818 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:25.818 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:25.818 SO libspdk_blobfs_bdev.so.6.0 00:03:25.818 LIB libspdk_bdev_error.a 00:03:25.818 LIB libspdk_bdev_null.a 00:03:25.818 LIB libspdk_bdev_gpt.a 00:03:25.818 SO libspdk_bdev_error.so.6.0 00:03:25.818 SYMLINK libspdk_blobfs_bdev.so 00:03:25.818 SO libspdk_bdev_null.so.6.0 00:03:25.818 SO libspdk_bdev_gpt.so.6.0 00:03:25.818 CC module/bdev/nvme/nvme_rpc.o 00:03:25.818 LIB libspdk_bdev_passthru.a 00:03:25.818 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:25.818 SYMLINK libspdk_bdev_error.so 00:03:25.818 SYMLINK libspdk_bdev_null.so 00:03:25.818 SO libspdk_bdev_passthru.so.6.0 00:03:25.818 LIB libspdk_bdev_delay.a 00:03:25.818 LIB libspdk_bdev_malloc.a 00:03:25.818 SYMLINK libspdk_bdev_gpt.so 00:03:25.818 CC module/bdev/nvme/bdev_mdns_client.o 00:03:25.818 CC module/bdev/nvme/vbdev_opal.o 00:03:25.818 SO libspdk_bdev_delay.so.6.0 00:03:25.818 SO libspdk_bdev_malloc.so.6.0 00:03:26.077 SYMLINK libspdk_bdev_passthru.so 00:03:26.077 SYMLINK libspdk_bdev_malloc.so 00:03:26.077 SYMLINK libspdk_bdev_delay.so 00:03:26.077 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:26.077 CC module/bdev/raid/bdev_raid.o 00:03:26.077 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:26.077 CC module/bdev/split/vbdev_split.o 00:03:26.077 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:26.077 CC module/bdev/xnvme/bdev_xnvme.o 00:03:26.335 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:26.335 LIB libspdk_bdev_lvol.a 00:03:26.335 CC module/bdev/raid/bdev_raid_rpc.o 00:03:26.335 SO libspdk_bdev_lvol.so.6.0 00:03:26.335 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:26.335 CC module/bdev/split/vbdev_split_rpc.o 00:03:26.335 CC module/bdev/raid/bdev_raid_sb.o 00:03:26.335 SYMLINK libspdk_bdev_lvol.so 00:03:26.594 CC module/bdev/raid/raid0.o 00:03:26.594 LIB libspdk_bdev_xnvme.a 00:03:26.594 CC module/bdev/raid/raid1.o 00:03:26.594 SO libspdk_bdev_xnvme.so.3.0 00:03:26.594 LIB libspdk_bdev_zone_block.a 00:03:26.594 LIB libspdk_bdev_split.a 00:03:26.594 SO libspdk_bdev_zone_block.so.6.0 00:03:26.594 SO libspdk_bdev_split.so.6.0 00:03:26.594 CC module/bdev/aio/bdev_aio.o 00:03:26.594 SYMLINK libspdk_bdev_xnvme.so 00:03:26.594 CC module/bdev/aio/bdev_aio_rpc.o 00:03:26.594 SYMLINK libspdk_bdev_split.so 00:03:26.594 CC module/bdev/ftl/bdev_ftl.o 00:03:26.594 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:03:26.594 SYMLINK libspdk_bdev_zone_block.so 00:03:26.594 CC module/bdev/raid/concat.o 00:03:26.852 CC module/bdev/iscsi/bdev_iscsi.o 00:03:26.852 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:26.852 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:26.852 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:26.852 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:26.852 LIB libspdk_bdev_ftl.a 00:03:26.852 LIB libspdk_bdev_aio.a 00:03:27.110 SO libspdk_bdev_ftl.so.6.0 00:03:27.110 SO libspdk_bdev_aio.so.6.0 00:03:27.110 SYMLINK libspdk_bdev_ftl.so 00:03:27.110 SYMLINK libspdk_bdev_aio.so 00:03:27.110 LIB libspdk_bdev_raid.a 00:03:27.369 LIB libspdk_bdev_iscsi.a 00:03:27.369 SO libspdk_bdev_raid.so.6.0 00:03:27.369 SO libspdk_bdev_iscsi.so.6.0 00:03:27.369 SYMLINK libspdk_bdev_iscsi.so 00:03:27.369 SYMLINK libspdk_bdev_raid.so 00:03:27.628 LIB libspdk_bdev_virtio.a 00:03:27.628 SO libspdk_bdev_virtio.so.6.0 00:03:27.628 SYMLINK libspdk_bdev_virtio.so 00:03:27.886 LIB libspdk_bdev_nvme.a 00:03:27.886 SO libspdk_bdev_nvme.so.7.0 00:03:28.144 SYMLINK libspdk_bdev_nvme.so 00:03:28.714 CC module/event/subsystems/scheduler/scheduler.o 00:03:28.714 CC module/event/subsystems/vmd/vmd.o 00:03:28.714 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:28.714 CC module/event/subsystems/iobuf/iobuf.o 00:03:28.714 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:28.714 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:28.714 CC module/event/subsystems/sock/sock.o 00:03:28.714 CC module/event/subsystems/keyring/keyring.o 00:03:28.973 LIB libspdk_event_scheduler.a 00:03:28.973 LIB libspdk_event_sock.a 00:03:28.973 LIB libspdk_event_vmd.a 00:03:28.973 LIB libspdk_event_keyring.a 00:03:28.973 LIB libspdk_event_vhost_blk.a 00:03:28.973 SO libspdk_event_sock.so.5.0 00:03:28.973 SO libspdk_event_scheduler.so.4.0 00:03:28.973 LIB libspdk_event_iobuf.a 00:03:28.973 SO libspdk_event_vmd.so.6.0 00:03:28.973 SO libspdk_event_keyring.so.1.0 00:03:28.973 SO libspdk_event_vhost_blk.so.3.0 00:03:28.973 SO libspdk_event_iobuf.so.3.0 00:03:28.973 SYMLINK libspdk_event_scheduler.so 00:03:28.973 SYMLINK libspdk_event_sock.so 00:03:28.973 SYMLINK libspdk_event_keyring.so 00:03:28.973 SYMLINK libspdk_event_vmd.so 00:03:28.973 SYMLINK libspdk_event_vhost_blk.so 00:03:28.973 SYMLINK libspdk_event_iobuf.so 00:03:29.232 CC module/event/subsystems/accel/accel.o 00:03:29.491 LIB libspdk_event_accel.a 00:03:29.491 SO libspdk_event_accel.so.6.0 00:03:29.750 SYMLINK libspdk_event_accel.so 00:03:30.009 CC module/event/subsystems/bdev/bdev.o 00:03:30.271 LIB libspdk_event_bdev.a 00:03:30.271 SO libspdk_event_bdev.so.6.0 00:03:30.271 SYMLINK libspdk_event_bdev.so 00:03:30.534 CC module/event/subsystems/scsi/scsi.o 00:03:30.534 CC module/event/subsystems/nbd/nbd.o 00:03:30.534 CC module/event/subsystems/ublk/ublk.o 00:03:30.534 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:30.534 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:30.793 LIB libspdk_event_nbd.a 00:03:30.793 LIB libspdk_event_ublk.a 00:03:30.793 LIB libspdk_event_scsi.a 00:03:30.793 SO libspdk_event_nbd.so.6.0 00:03:30.793 SO libspdk_event_ublk.so.3.0 00:03:30.793 SO libspdk_event_scsi.so.6.0 00:03:30.793 LIB libspdk_event_nvmf.a 00:03:30.793 SYMLINK libspdk_event_nbd.so 00:03:30.793 SYMLINK libspdk_event_scsi.so 00:03:30.793 SYMLINK libspdk_event_ublk.so 00:03:30.793 SO libspdk_event_nvmf.so.6.0 00:03:31.052 SYMLINK libspdk_event_nvmf.so 00:03:31.052 CC module/event/subsystems/iscsi/iscsi.o 00:03:31.052 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:31.312 LIB libspdk_event_vhost_scsi.a 00:03:31.312 LIB libspdk_event_iscsi.a 00:03:31.312 SO libspdk_event_vhost_scsi.so.3.0 00:03:31.312 SO libspdk_event_iscsi.so.6.0 00:03:31.312 SYMLINK libspdk_event_vhost_scsi.so 00:03:31.572 SYMLINK libspdk_event_iscsi.so 00:03:31.572 SO libspdk.so.6.0 00:03:31.572 SYMLINK libspdk.so 00:03:32.141 CXX app/trace/trace.o 00:03:32.141 CC examples/vmd/lsvmd/lsvmd.o 00:03:32.141 CC examples/ioat/perf/perf.o 00:03:32.141 CC examples/accel/perf/accel_perf.o 00:03:32.141 CC examples/nvme/hello_world/hello_world.o 00:03:32.141 CC examples/sock/hello_world/hello_sock.o 00:03:32.141 CC examples/blob/hello_world/hello_blob.o 00:03:32.141 CC test/accel/dif/dif.o 00:03:32.141 CC examples/nvmf/nvmf/nvmf.o 00:03:32.141 CC examples/bdev/hello_world/hello_bdev.o 00:03:32.141 LINK lsvmd 00:03:32.401 LINK hello_blob 00:03:32.401 LINK ioat_perf 00:03:32.401 LINK hello_sock 00:03:32.401 LINK hello_world 00:03:32.401 LINK hello_bdev 00:03:32.401 LINK nvmf 00:03:32.401 LINK spdk_trace 00:03:32.401 CC examples/vmd/led/led.o 00:03:32.401 LINK dif 00:03:32.660 LINK accel_perf 00:03:32.660 CC examples/ioat/verify/verify.o 00:03:32.660 CC examples/nvme/reconnect/reconnect.o 00:03:32.660 LINK led 00:03:32.660 CC examples/blob/cli/blobcli.o 00:03:32.660 CC examples/util/zipf/zipf.o 00:03:32.660 CC examples/bdev/bdevperf/bdevperf.o 00:03:32.660 CC app/trace_record/trace_record.o 00:03:32.918 LINK zipf 00:03:32.918 CC examples/thread/thread/thread_ex.o 00:03:32.918 LINK verify 00:03:32.918 CC app/nvmf_tgt/nvmf_main.o 00:03:32.918 CC test/app/bdev_svc/bdev_svc.o 00:03:32.918 LINK reconnect 00:03:32.918 CC examples/idxd/perf/perf.o 00:03:32.918 LINK spdk_trace_record 00:03:33.176 LINK nvmf_tgt 00:03:33.176 LINK thread 00:03:33.176 CC app/iscsi_tgt/iscsi_tgt.o 00:03:33.176 CC app/spdk_tgt/spdk_tgt.o 00:03:33.176 LINK bdev_svc 00:03:33.176 LINK blobcli 00:03:33.176 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:33.176 CC app/spdk_lspci/spdk_lspci.o 00:03:33.435 LINK iscsi_tgt 00:03:33.435 LINK idxd_perf 00:03:33.435 LINK spdk_tgt 00:03:33.435 LINK spdk_lspci 00:03:33.435 CC test/app/histogram_perf/histogram_perf.o 00:03:33.435 CC test/bdev/bdevio/bdevio.o 00:03:33.435 CC test/blobfs/mkfs/mkfs.o 00:03:33.435 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:33.694 LINK bdevperf 00:03:33.694 CC app/spdk_nvme_perf/perf.o 00:03:33.694 TEST_HEADER include/spdk/accel.h 00:03:33.694 TEST_HEADER include/spdk/accel_module.h 00:03:33.694 TEST_HEADER include/spdk/assert.h 00:03:33.694 TEST_HEADER include/spdk/barrier.h 00:03:33.694 LINK histogram_perf 00:03:33.694 TEST_HEADER include/spdk/base64.h 00:03:33.694 TEST_HEADER include/spdk/bdev.h 00:03:33.694 TEST_HEADER include/spdk/bdev_module.h 00:03:33.694 TEST_HEADER include/spdk/bdev_zone.h 00:03:33.694 TEST_HEADER include/spdk/bit_array.h 00:03:33.694 TEST_HEADER include/spdk/bit_pool.h 00:03:33.694 TEST_HEADER include/spdk/blob_bdev.h 00:03:33.694 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:33.694 TEST_HEADER include/spdk/blobfs.h 00:03:33.694 TEST_HEADER include/spdk/blob.h 00:03:33.694 TEST_HEADER include/spdk/conf.h 00:03:33.694 TEST_HEADER include/spdk/config.h 00:03:33.694 TEST_HEADER include/spdk/cpuset.h 00:03:33.694 TEST_HEADER include/spdk/crc16.h 00:03:33.694 TEST_HEADER include/spdk/crc32.h 00:03:33.694 TEST_HEADER include/spdk/crc64.h 00:03:33.694 TEST_HEADER include/spdk/dif.h 00:03:33.694 TEST_HEADER include/spdk/dma.h 00:03:33.694 TEST_HEADER include/spdk/endian.h 00:03:33.694 
TEST_HEADER include/spdk/env_dpdk.h 00:03:33.694 TEST_HEADER include/spdk/env.h 00:03:33.694 TEST_HEADER include/spdk/event.h 00:03:33.694 TEST_HEADER include/spdk/fd_group.h 00:03:33.694 TEST_HEADER include/spdk/fd.h 00:03:33.694 TEST_HEADER include/spdk/file.h 00:03:33.694 TEST_HEADER include/spdk/ftl.h 00:03:33.694 TEST_HEADER include/spdk/gpt_spec.h 00:03:33.694 TEST_HEADER include/spdk/hexlify.h 00:03:33.694 TEST_HEADER include/spdk/histogram_data.h 00:03:33.694 TEST_HEADER include/spdk/idxd.h 00:03:33.694 TEST_HEADER include/spdk/idxd_spec.h 00:03:33.694 TEST_HEADER include/spdk/init.h 00:03:33.694 TEST_HEADER include/spdk/ioat.h 00:03:33.694 TEST_HEADER include/spdk/ioat_spec.h 00:03:33.694 TEST_HEADER include/spdk/iscsi_spec.h 00:03:33.694 TEST_HEADER include/spdk/json.h 00:03:33.694 TEST_HEADER include/spdk/jsonrpc.h 00:03:33.694 TEST_HEADER include/spdk/keyring.h 00:03:33.694 TEST_HEADER include/spdk/keyring_module.h 00:03:33.694 TEST_HEADER include/spdk/likely.h 00:03:33.694 TEST_HEADER include/spdk/log.h 00:03:33.694 TEST_HEADER include/spdk/lvol.h 00:03:33.694 TEST_HEADER include/spdk/memory.h 00:03:33.694 TEST_HEADER include/spdk/mmio.h 00:03:33.694 TEST_HEADER include/spdk/nbd.h 00:03:33.694 TEST_HEADER include/spdk/notify.h 00:03:33.694 LINK mkfs 00:03:33.694 TEST_HEADER include/spdk/nvme.h 00:03:33.694 TEST_HEADER include/spdk/nvme_intel.h 00:03:33.694 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:33.694 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:33.694 TEST_HEADER include/spdk/nvme_spec.h 00:03:33.694 TEST_HEADER include/spdk/nvme_zns.h 00:03:33.694 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:33.694 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:33.694 TEST_HEADER include/spdk/nvmf.h 00:03:33.694 TEST_HEADER include/spdk/nvmf_spec.h 00:03:33.694 TEST_HEADER include/spdk/nvmf_transport.h 00:03:33.694 TEST_HEADER include/spdk/opal.h 00:03:33.694 TEST_HEADER include/spdk/opal_spec.h 00:03:33.694 TEST_HEADER include/spdk/pci_ids.h 00:03:33.694 TEST_HEADER include/spdk/pipe.h 00:03:33.694 TEST_HEADER include/spdk/queue.h 00:03:33.694 TEST_HEADER include/spdk/reduce.h 00:03:33.694 TEST_HEADER include/spdk/rpc.h 00:03:33.694 TEST_HEADER include/spdk/scheduler.h 00:03:33.694 TEST_HEADER include/spdk/scsi.h 00:03:33.694 TEST_HEADER include/spdk/scsi_spec.h 00:03:33.694 TEST_HEADER include/spdk/sock.h 00:03:33.694 TEST_HEADER include/spdk/stdinc.h 00:03:33.694 TEST_HEADER include/spdk/string.h 00:03:33.694 TEST_HEADER include/spdk/thread.h 00:03:33.694 TEST_HEADER include/spdk/trace.h 00:03:33.694 TEST_HEADER include/spdk/trace_parser.h 00:03:33.694 TEST_HEADER include/spdk/tree.h 00:03:33.694 TEST_HEADER include/spdk/ublk.h 00:03:33.694 TEST_HEADER include/spdk/util.h 00:03:33.694 TEST_HEADER include/spdk/uuid.h 00:03:33.694 TEST_HEADER include/spdk/version.h 00:03:33.694 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:33.694 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:33.694 TEST_HEADER include/spdk/vhost.h 00:03:33.694 TEST_HEADER include/spdk/vmd.h 00:03:33.694 TEST_HEADER include/spdk/xor.h 00:03:33.694 TEST_HEADER include/spdk/zipf.h 00:03:33.694 CXX test/cpp_headers/accel.o 00:03:33.694 CXX test/cpp_headers/accel_module.o 00:03:33.694 CC test/dma/test_dma/test_dma.o 00:03:33.953 LINK nvme_manage 00:03:33.953 CC test/env/mem_callbacks/mem_callbacks.o 00:03:33.953 LINK bdevio 00:03:33.953 CXX test/cpp_headers/assert.o 00:03:33.953 CC test/event/event_perf/event_perf.o 00:03:33.953 CC test/app/jsoncat/jsoncat.o 00:03:33.953 LINK nvme_fuzz 00:03:33.953 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:03:34.213 CC examples/nvme/arbitration/arbitration.o 00:03:34.213 CXX test/cpp_headers/barrier.o 00:03:34.213 LINK event_perf 00:03:34.213 LINK jsoncat 00:03:34.213 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:34.213 LINK test_dma 00:03:34.213 LINK interrupt_tgt 00:03:34.213 CC test/app/stub/stub.o 00:03:34.213 CXX test/cpp_headers/base64.o 00:03:34.213 CXX test/cpp_headers/bdev.o 00:03:34.471 CC test/event/reactor/reactor.o 00:03:34.472 LINK mem_callbacks 00:03:34.472 CXX test/cpp_headers/bdev_module.o 00:03:34.472 LINK stub 00:03:34.472 CXX test/cpp_headers/bdev_zone.o 00:03:34.472 LINK arbitration 00:03:34.472 LINK reactor 00:03:34.472 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:34.472 CXX test/cpp_headers/bit_array.o 00:03:34.472 LINK spdk_nvme_perf 00:03:34.472 CC test/env/vtophys/vtophys.o 00:03:34.472 CXX test/cpp_headers/bit_pool.o 00:03:34.731 CXX test/cpp_headers/blob_bdev.o 00:03:34.731 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:34.731 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:34.731 CC examples/nvme/hotplug/hotplug.o 00:03:34.731 LINK vtophys 00:03:34.731 CXX test/cpp_headers/blobfs_bdev.o 00:03:34.731 CXX test/cpp_headers/blobfs.o 00:03:34.731 CC app/spdk_nvme_identify/identify.o 00:03:34.731 CC test/event/reactor_perf/reactor_perf.o 00:03:34.990 LINK env_dpdk_post_init 00:03:34.990 CXX test/cpp_headers/blob.o 00:03:34.990 LINK reactor_perf 00:03:34.990 LINK hotplug 00:03:34.990 CC test/rpc_client/rpc_client_test.o 00:03:34.990 CXX test/cpp_headers/conf.o 00:03:34.990 CC test/nvme/aer/aer.o 00:03:34.990 LINK vhost_fuzz 00:03:35.250 CC test/lvol/esnap/esnap.o 00:03:35.250 CC test/env/memory/memory_ut.o 00:03:35.250 CC test/event/app_repeat/app_repeat.o 00:03:35.250 LINK rpc_client_test 00:03:35.250 CXX test/cpp_headers/config.o 00:03:35.250 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:35.250 CXX test/cpp_headers/cpuset.o 00:03:35.250 LINK app_repeat 00:03:35.250 CC app/spdk_nvme_discover/discovery_aer.o 00:03:35.509 CXX test/cpp_headers/crc16.o 00:03:35.509 LINK aer 00:03:35.509 LINK cmb_copy 00:03:35.509 CXX test/cpp_headers/crc32.o 00:03:35.509 LINK spdk_nvme_discover 00:03:35.509 CC test/thread/poller_perf/poller_perf.o 00:03:35.768 CC test/event/scheduler/scheduler.o 00:03:35.768 CC test/nvme/reset/reset.o 00:03:35.768 CC examples/nvme/abort/abort.o 00:03:35.768 CXX test/cpp_headers/crc64.o 00:03:35.768 LINK spdk_nvme_identify 00:03:35.768 LINK poller_perf 00:03:35.768 CXX test/cpp_headers/dif.o 00:03:35.768 LINK scheduler 00:03:35.768 CXX test/cpp_headers/dma.o 00:03:36.026 CC test/env/pci/pci_ut.o 00:03:36.026 LINK reset 00:03:36.026 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:36.026 CC app/spdk_top/spdk_top.o 00:03:36.026 CXX test/cpp_headers/endian.o 00:03:36.026 LINK memory_ut 00:03:36.026 LINK abort 00:03:36.026 LINK pmr_persistence 00:03:36.285 CC test/nvme/sgl/sgl.o 00:03:36.285 CC test/nvme/e2edp/nvme_dp.o 00:03:36.285 CXX test/cpp_headers/env_dpdk.o 00:03:36.285 LINK iscsi_fuzz 00:03:36.285 LINK pci_ut 00:03:36.285 CC test/nvme/overhead/overhead.o 00:03:36.285 CXX test/cpp_headers/env.o 00:03:36.285 CC test/nvme/err_injection/err_injection.o 00:03:36.285 CC test/nvme/startup/startup.o 00:03:36.543 LINK sgl 00:03:36.543 LINK nvme_dp 00:03:36.543 CXX test/cpp_headers/event.o 00:03:36.543 LINK err_injection 00:03:36.543 LINK startup 00:03:36.543 CXX test/cpp_headers/fd_group.o 00:03:36.543 LINK overhead 00:03:36.543 CXX test/cpp_headers/fd.o 00:03:36.802 CC app/vhost/vhost.o 
00:03:36.802 CC test/nvme/reserve/reserve.o 00:03:36.802 CC app/spdk_dd/spdk_dd.o 00:03:36.802 CXX test/cpp_headers/file.o 00:03:36.802 CXX test/cpp_headers/ftl.o 00:03:36.802 CC test/nvme/simple_copy/simple_copy.o 00:03:36.802 CC test/nvme/connect_stress/connect_stress.o 00:03:36.802 LINK vhost 00:03:37.061 LINK reserve 00:03:37.061 LINK spdk_top 00:03:37.061 CC app/fio/nvme/fio_plugin.o 00:03:37.061 CXX test/cpp_headers/gpt_spec.o 00:03:37.061 CC test/nvme/boot_partition/boot_partition.o 00:03:37.061 LINK connect_stress 00:03:37.061 LINK simple_copy 00:03:37.061 CXX test/cpp_headers/hexlify.o 00:03:37.061 LINK spdk_dd 00:03:37.320 LINK boot_partition 00:03:37.320 CC app/fio/bdev/fio_plugin.o 00:03:37.320 CC test/nvme/compliance/nvme_compliance.o 00:03:37.320 CC test/nvme/fused_ordering/fused_ordering.o 00:03:37.320 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:37.320 CXX test/cpp_headers/histogram_data.o 00:03:37.320 CC test/nvme/fdp/fdp.o 00:03:37.320 CXX test/cpp_headers/idxd.o 00:03:37.320 LINK fused_ordering 00:03:37.589 LINK doorbell_aers 00:03:37.589 CXX test/cpp_headers/idxd_spec.o 00:03:37.589 CC test/nvme/cuse/cuse.o 00:03:37.589 CXX test/cpp_headers/init.o 00:03:37.589 LINK nvme_compliance 00:03:37.589 CXX test/cpp_headers/ioat.o 00:03:37.589 LINK spdk_nvme 00:03:37.589 CXX test/cpp_headers/ioat_spec.o 00:03:37.589 CXX test/cpp_headers/iscsi_spec.o 00:03:37.853 CXX test/cpp_headers/json.o 00:03:37.853 LINK fdp 00:03:37.853 CXX test/cpp_headers/jsonrpc.o 00:03:37.853 LINK spdk_bdev 00:03:37.853 CXX test/cpp_headers/keyring.o 00:03:37.853 CXX test/cpp_headers/keyring_module.o 00:03:37.853 CXX test/cpp_headers/likely.o 00:03:37.853 CXX test/cpp_headers/log.o 00:03:37.853 CXX test/cpp_headers/lvol.o 00:03:37.853 CXX test/cpp_headers/memory.o 00:03:37.853 CXX test/cpp_headers/mmio.o 00:03:37.853 CXX test/cpp_headers/nbd.o 00:03:37.853 CXX test/cpp_headers/notify.o 00:03:37.853 CXX test/cpp_headers/nvme.o 00:03:37.853 CXX test/cpp_headers/nvme_intel.o 00:03:37.853 CXX test/cpp_headers/nvme_ocssd.o 00:03:37.853 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:38.112 CXX test/cpp_headers/nvme_spec.o 00:03:38.112 CXX test/cpp_headers/nvme_zns.o 00:03:38.112 CXX test/cpp_headers/nvmf_cmd.o 00:03:38.112 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:38.112 CXX test/cpp_headers/nvmf.o 00:03:38.112 CXX test/cpp_headers/nvmf_spec.o 00:03:38.112 CXX test/cpp_headers/nvmf_transport.o 00:03:38.112 CXX test/cpp_headers/opal.o 00:03:38.112 CXX test/cpp_headers/opal_spec.o 00:03:38.112 CXX test/cpp_headers/pci_ids.o 00:03:38.112 CXX test/cpp_headers/pipe.o 00:03:38.371 CXX test/cpp_headers/queue.o 00:03:38.371 CXX test/cpp_headers/reduce.o 00:03:38.371 CXX test/cpp_headers/rpc.o 00:03:38.371 CXX test/cpp_headers/scheduler.o 00:03:38.371 CXX test/cpp_headers/scsi.o 00:03:38.371 CXX test/cpp_headers/scsi_spec.o 00:03:38.371 CXX test/cpp_headers/sock.o 00:03:38.371 CXX test/cpp_headers/stdinc.o 00:03:38.371 CXX test/cpp_headers/string.o 00:03:38.371 CXX test/cpp_headers/thread.o 00:03:38.371 CXX test/cpp_headers/trace.o 00:03:38.371 CXX test/cpp_headers/trace_parser.o 00:03:38.371 CXX test/cpp_headers/tree.o 00:03:38.371 CXX test/cpp_headers/ublk.o 00:03:38.629 CXX test/cpp_headers/util.o 00:03:38.629 CXX test/cpp_headers/uuid.o 00:03:38.629 CXX test/cpp_headers/version.o 00:03:38.629 CXX test/cpp_headers/vfio_user_pci.o 00:03:38.629 CXX test/cpp_headers/vfio_user_spec.o 00:03:38.629 LINK cuse 00:03:38.629 CXX test/cpp_headers/vhost.o 00:03:38.629 CXX test/cpp_headers/vmd.o 00:03:38.629 CXX 
test/cpp_headers/xor.o 00:03:38.629 CXX test/cpp_headers/zipf.o 00:03:40.559 LINK esnap 00:03:40.818 00:03:40.818 real 1m11.756s 00:03:40.818 user 6m52.348s 00:03:40.818 sys 1m34.488s 00:03:40.818 19:18:06 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:03:40.818 19:18:06 -- common/autotest_common.sh@10 -- $ set +x 00:03:40.818 ************************************ 00:03:40.818 END TEST make 00:03:40.818 ************************************ 00:03:41.077 19:18:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:41.077 19:18:06 -- pm/common@30 -- $ signal_monitor_resources TERM 00:03:41.077 19:18:06 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:03:41.077 19:18:06 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.077 19:18:06 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:41.077 19:18:06 -- pm/common@45 -- $ pid=5391 00:03:41.077 19:18:06 -- pm/common@52 -- $ sudo kill -TERM 5391 00:03:41.077 19:18:06 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.077 19:18:06 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:41.077 19:18:06 -- pm/common@45 -- $ pid=5390 00:03:41.077 19:18:06 -- pm/common@52 -- $ sudo kill -TERM 5390 00:03:41.077 19:18:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:41.077 19:18:06 -- nvmf/common.sh@7 -- # uname -s 00:03:41.077 19:18:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:41.077 19:18:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:41.077 19:18:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:41.077 19:18:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:41.077 19:18:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:41.077 19:18:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:41.077 19:18:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:41.077 19:18:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:41.077 19:18:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:41.077 19:18:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:41.077 19:18:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:84fe9573-d922-4396-b597-209883f76b96 00:03:41.077 19:18:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=84fe9573-d922-4396-b597-209883f76b96 00:03:41.077 19:18:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:41.077 19:18:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:41.077 19:18:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:41.077 19:18:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:41.077 19:18:06 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:41.077 19:18:06 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:41.077 19:18:06 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:41.077 19:18:06 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:41.077 19:18:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.077 19:18:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.077 19:18:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.077 19:18:06 -- paths/export.sh@5 -- # export PATH 00:03:41.077 19:18:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:41.077 19:18:06 -- nvmf/common.sh@47 -- # : 0 00:03:41.077 19:18:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:41.077 19:18:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:41.077 19:18:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:41.077 19:18:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:41.077 19:18:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:41.077 19:18:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:41.077 19:18:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:41.077 19:18:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:41.077 19:18:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:41.077 19:18:06 -- spdk/autotest.sh@32 -- # uname -s 00:03:41.077 19:18:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:41.077 19:18:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:41.078 19:18:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:41.337 19:18:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:41.337 19:18:06 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:41.337 19:18:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:41.337 19:18:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:41.337 19:18:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:41.337 19:18:06 -- spdk/autotest.sh@48 -- # udevadm_pid=53264 00:03:41.337 19:18:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:41.337 19:18:06 -- pm/common@17 -- # local monitor 00:03:41.337 19:18:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.337 19:18:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:41.337 19:18:06 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=53268 00:03:41.337 19:18:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:41.337 19:18:06 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=53271 00:03:41.337 19:18:06 -- pm/common@26 -- # sleep 1 00:03:41.337 19:18:06 -- pm/common@21 -- # date +%s 00:03:41.337 19:18:06 -- pm/common@21 -- # date +%s 00:03:41.337 19:18:06 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713986286 00:03:41.337 19:18:06 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d 
/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713986286 00:03:41.337 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713986286_collect-vmstat.pm.log 00:03:41.337 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713986286_collect-cpu-load.pm.log 00:03:42.275 19:18:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:42.275 19:18:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:42.275 19:18:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:42.275 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:03:42.275 19:18:07 -- spdk/autotest.sh@59 -- # create_test_list 00:03:42.275 19:18:07 -- common/autotest_common.sh@734 -- # xtrace_disable 00:03:42.275 19:18:07 -- common/autotest_common.sh@10 -- # set +x 00:03:42.275 19:18:07 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:42.275 19:18:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:42.275 19:18:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:42.275 19:18:07 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:42.275 19:18:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:42.275 19:18:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:42.275 19:18:07 -- common/autotest_common.sh@1441 -- # uname 00:03:42.275 19:18:07 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:03:42.275 19:18:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:42.275 19:18:07 -- common/autotest_common.sh@1461 -- # uname 00:03:42.275 19:18:07 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:03:42.275 19:18:07 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:42.275 19:18:07 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:42.275 19:18:07 -- spdk/autotest.sh@72 -- # hash lcov 00:03:42.275 19:18:07 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:42.275 19:18:07 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:42.275 --rc lcov_branch_coverage=1 00:03:42.275 --rc lcov_function_coverage=1 00:03:42.275 --rc genhtml_branch_coverage=1 00:03:42.275 --rc genhtml_function_coverage=1 00:03:42.275 --rc genhtml_legend=1 00:03:42.275 --rc geninfo_all_blocks=1 00:03:42.275 ' 00:03:42.275 19:18:07 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:42.275 --rc lcov_branch_coverage=1 00:03:42.275 --rc lcov_function_coverage=1 00:03:42.275 --rc genhtml_branch_coverage=1 00:03:42.275 --rc genhtml_function_coverage=1 00:03:42.275 --rc genhtml_legend=1 00:03:42.275 --rc geninfo_all_blocks=1 00:03:42.275 ' 00:03:42.275 19:18:07 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:42.275 --rc lcov_branch_coverage=1 00:03:42.275 --rc lcov_function_coverage=1 00:03:42.275 --rc genhtml_branch_coverage=1 00:03:42.275 --rc genhtml_function_coverage=1 00:03:42.275 --rc genhtml_legend=1 00:03:42.275 --rc geninfo_all_blocks=1 00:03:42.275 --no-external' 00:03:42.275 19:18:07 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:42.275 --rc lcov_branch_coverage=1 00:03:42.275 --rc lcov_function_coverage=1 00:03:42.275 --rc genhtml_branch_coverage=1 00:03:42.275 --rc genhtml_function_coverage=1 00:03:42.275 --rc genhtml_legend=1 00:03:42.275 --rc geninfo_all_blocks=1 00:03:42.275 --no-external' 00:03:42.275 19:18:07 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:42.535 lcov: LCOV version 1.14 00:03:42.535 19:18:07 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:50.656 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:50.656 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:50.656 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:50.656 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:50.656 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:50.657 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:55.933 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:55.933 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:08.194 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:08.194 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:08.194 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:08.195 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:08.195 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:08.195 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:08.195 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:08.195 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:08.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:08.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:08.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:08.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:08.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:08.196 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:08.196 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:10.730 19:18:36 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:10.730 19:18:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:10.730 19:18:36 -- common/autotest_common.sh@10 -- # set +x 00:04:10.730 19:18:36 -- spdk/autotest.sh@91 -- # rm -f 00:04:10.730 19:18:36 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:10.989 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.558 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:11.817 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:11.817 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:11.817 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:11.817 19:18:37 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:11.817 19:18:37 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:11.817 19:18:37 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 
00:04:11.817 19:18:37 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:11.817 19:18:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:11.817 19:18:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:11.817 19:18:37 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:11.817 19:18:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:11.817 19:18:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:11.817 19:18:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:11.817 19:18:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:11.817 19:18:37 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:11.817 19:18:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:11.817 19:18:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:11.817 19:18:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:11.817 19:18:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:11.817 19:18:37 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:11.817 19:18:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:11.817 19:18:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:11.818 19:18:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:11.818 19:18:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:11.818 19:18:37 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:11.818 19:18:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:11.818 19:18:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:11.818 19:18:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:11.818 19:18:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:11.818 19:18:37 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:11.818 19:18:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:11.818 19:18:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:11.818 19:18:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:11.818 19:18:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:11.818 19:18:37 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:11.818 19:18:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:11.818 19:18:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:11.818 19:18:37 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:11.818 19:18:37 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:11.818 19:18:37 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:11.818 19:18:37 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:11.818 19:18:37 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:11.818 19:18:37 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:11.818 19:18:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.818 19:18:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:11.818 19:18:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:11.818 19:18:37 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:11.818 19:18:37 -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:11.818 No valid GPT data, bailing 00:04:11.818 19:18:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:11.818 19:18:37 -- scripts/common.sh@391 -- # pt= 00:04:11.818 19:18:37 -- scripts/common.sh@392 -- # return 1 00:04:11.818 19:18:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:11.818 1+0 records in 00:04:11.818 1+0 records out 00:04:11.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159721 s, 65.7 MB/s 00:04:11.818 19:18:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:11.818 19:18:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:11.818 19:18:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:04:11.818 19:18:37 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:04:11.818 19:18:37 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:11.818 No valid GPT data, bailing 00:04:12.077 19:18:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:12.077 19:18:37 -- scripts/common.sh@391 -- # pt= 00:04:12.077 19:18:37 -- scripts/common.sh@392 -- # return 1 00:04:12.077 19:18:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:12.077 1+0 records in 00:04:12.077 1+0 records out 00:04:12.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500068 s, 210 MB/s 00:04:12.077 19:18:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.077 19:18:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:12.077 19:18:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:04:12.077 19:18:37 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:04:12.077 19:18:37 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:12.077 No valid GPT data, bailing 00:04:12.077 19:18:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:12.077 19:18:37 -- scripts/common.sh@391 -- # pt= 00:04:12.077 19:18:37 -- scripts/common.sh@392 -- # return 1 00:04:12.077 19:18:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:12.077 1+0 records in 00:04:12.077 1+0 records out 00:04:12.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00655628 s, 160 MB/s 00:04:12.078 19:18:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.078 19:18:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:12.078 19:18:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:04:12.078 19:18:37 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:04:12.078 19:18:37 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:12.078 No valid GPT data, bailing 00:04:12.078 19:18:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:12.078 19:18:37 -- scripts/common.sh@391 -- # pt= 00:04:12.078 19:18:37 -- scripts/common.sh@392 -- # return 1 00:04:12.078 19:18:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:12.078 1+0 records in 00:04:12.078 1+0 records out 00:04:12.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00581225 s, 180 MB/s 00:04:12.078 19:18:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.078 19:18:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:12.078 19:18:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:04:12.078 19:18:37 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:04:12.078 19:18:37 -- 
scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:12.078 No valid GPT data, bailing 00:04:12.078 19:18:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:12.078 19:18:37 -- scripts/common.sh@391 -- # pt= 00:04:12.078 19:18:37 -- scripts/common.sh@392 -- # return 1 00:04:12.078 19:18:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:12.078 1+0 records in 00:04:12.078 1+0 records out 00:04:12.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00600409 s, 175 MB/s 00:04:12.078 19:18:37 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:12.078 19:18:37 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:12.078 19:18:37 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:04:12.078 19:18:37 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:04:12.078 19:18:37 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:12.336 No valid GPT data, bailing 00:04:12.336 19:18:37 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:12.336 19:18:37 -- scripts/common.sh@391 -- # pt= 00:04:12.336 19:18:37 -- scripts/common.sh@392 -- # return 1 00:04:12.336 19:18:37 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:12.336 1+0 records in 00:04:12.336 1+0 records out 00:04:12.336 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00661319 s, 159 MB/s 00:04:12.336 19:18:37 -- spdk/autotest.sh@118 -- # sync 00:04:12.336 19:18:37 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:12.336 19:18:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:12.336 19:18:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:14.868 19:18:40 -- spdk/autotest.sh@124 -- # uname -s 00:04:14.868 19:18:40 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:14.868 19:18:40 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:14.868 19:18:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.868 19:18:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.868 19:18:40 -- common/autotest_common.sh@10 -- # set +x 00:04:14.868 ************************************ 00:04:14.868 START TEST setup.sh 00:04:14.868 ************************************ 00:04:14.868 19:18:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:04:15.129 * Looking for test storage... 00:04:15.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:15.129 19:18:40 -- setup/test-setup.sh@10 -- # uname -s 00:04:15.129 19:18:40 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:15.129 19:18:40 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:15.129 19:18:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:15.129 19:18:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:15.129 19:18:40 -- common/autotest_common.sh@10 -- # set +x 00:04:15.129 ************************************ 00:04:15.129 START TEST acl 00:04:15.129 ************************************ 00:04:15.129 19:18:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:04:15.129 * Looking for test storage... 
00:04:15.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:15.388 19:18:40 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:15.388 19:18:40 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:15.388 19:18:40 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:15.388 19:18:40 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:15.388 19:18:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:15.388 19:18:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:15.388 19:18:40 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:15.388 19:18:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:15.388 19:18:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:15.388 19:18:40 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:15.388 19:18:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:15.388 19:18:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:15.388 19:18:40 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:15.388 19:18:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:15.388 19:18:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:15.388 19:18:40 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:15.388 19:18:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:15.388 19:18:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:15.388 19:18:40 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:15.388 19:18:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:15.388 19:18:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:15.388 19:18:40 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:15.388 19:18:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:15.388 19:18:40 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:15.388 19:18:40 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:15.388 19:18:40 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:15.388 19:18:40 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:15.388 19:18:40 -- setup/acl.sh@12 -- # devs=() 00:04:15.388 19:18:40 -- setup/acl.sh@12 -- # declare -a devs 
00:04:15.388 19:18:40 -- setup/acl.sh@13 -- # drivers=() 00:04:15.388 19:18:40 -- setup/acl.sh@13 -- # declare -A drivers 00:04:15.388 19:18:40 -- setup/acl.sh@51 -- # setup reset 00:04:15.388 19:18:40 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.388 19:18:40 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:16.768 19:18:42 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:16.768 19:18:42 -- setup/acl.sh@16 -- # local dev driver 00:04:16.768 19:18:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:16.768 19:18:42 -- setup/acl.sh@15 -- # setup output status 00:04:16.768 19:18:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:16.768 19:18:42 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:17.337 19:18:42 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:04:17.337 19:18:42 -- setup/acl.sh@19 -- # continue 00:04:17.337 19:18:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:17.906 Hugepages 00:04:17.906 node hugesize free / total 00:04:17.906 19:18:43 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:17.906 19:18:43 -- setup/acl.sh@19 -- # continue 00:04:17.906 19:18:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:17.906 00:04:17.906 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:17.906 19:18:43 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:17.906 19:18:43 -- setup/acl.sh@19 -- # continue 00:04:17.906 19:18:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:17.906 19:18:43 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:04:17.906 19:18:43 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:04:17.906 19:18:43 -- setup/acl.sh@20 -- # continue 00:04:17.906 19:18:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:17.906 19:18:43 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:04:17.906 19:18:43 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:17.906 19:18:43 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:17.906 19:18:43 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:17.906 19:18:43 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:17.906 19:18:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.196 19:18:43 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:04:18.196 19:18:43 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.196 19:18:43 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:18.196 19:18:43 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.196 19:18:43 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.196 19:18:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.196 19:18:43 -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]] 00:04:18.196 19:18:43 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.196 19:18:43 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:18.196 19:18:43 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.196 19:18:43 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.196 19:18:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.196 19:18:43 -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]] 00:04:18.196 19:18:43 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:18.196 19:18:43 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:18.196 19:18:43 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:18.196 19:18:43 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:18.196 19:18:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:18.196 19:18:43 
-- setup/acl.sh@24 -- # (( 4 > 0 )) 00:04:18.196 19:18:43 -- setup/acl.sh@54 -- # run_test denied denied 00:04:18.196 19:18:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.196 19:18:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.196 19:18:43 -- common/autotest_common.sh@10 -- # set +x 00:04:18.476 ************************************ 00:04:18.476 START TEST denied 00:04:18.476 ************************************ 00:04:18.476 19:18:43 -- common/autotest_common.sh@1111 -- # denied 00:04:18.476 19:18:43 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:04:18.476 19:18:43 -- setup/acl.sh@38 -- # setup output config 00:04:18.476 19:18:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.476 19:18:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:18.476 19:18:43 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:04:19.855 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:04:19.855 19:18:45 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:04:19.855 19:18:45 -- setup/acl.sh@28 -- # local dev driver 00:04:19.855 19:18:45 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:19.855 19:18:45 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:04:19.855 19:18:45 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:04:19.855 19:18:45 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:19.855 19:18:45 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:19.855 19:18:45 -- setup/acl.sh@41 -- # setup reset 00:04:19.855 19:18:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.855 19:18:45 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:26.428 00:04:26.428 real 0m7.588s 00:04:26.428 user 0m0.945s 00:04:26.428 sys 0m1.754s 00:04:26.428 19:18:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:26.428 19:18:51 -- common/autotest_common.sh@10 -- # set +x 00:04:26.428 ************************************ 00:04:26.428 END TEST denied 00:04:26.428 ************************************ 00:04:26.428 19:18:51 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:26.428 19:18:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.428 19:18:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.428 19:18:51 -- common/autotest_common.sh@10 -- # set +x 00:04:26.428 ************************************ 00:04:26.429 START TEST allowed 00:04:26.429 ************************************ 00:04:26.429 19:18:51 -- common/autotest_common.sh@1111 -- # allowed 00:04:26.429 19:18:51 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:04:26.429 19:18:51 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:04:26.429 19:18:51 -- setup/acl.sh@45 -- # setup output config 00:04:26.429 19:18:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.429 19:18:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.366 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.366 19:18:52 -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:27.366 19:18:52 -- setup/acl.sh@28 -- # local dev driver 00:04:27.366 19:18:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:27.366 19:18:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:04:27.366 19:18:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:04:27.366 19:18:52 -- setup/acl.sh@32 -- # 
driver=/sys/bus/pci/drivers/nvme 00:04:27.366 19:18:52 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:27.366 19:18:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:27.366 19:18:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]] 00:04:27.366 19:18:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver 00:04:27.367 19:18:52 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:27.367 19:18:52 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:27.367 19:18:52 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:27.367 19:18:52 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]] 00:04:27.367 19:18:52 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver 00:04:27.367 19:18:52 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:27.367 19:18:52 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:27.367 19:18:52 -- setup/acl.sh@48 -- # setup reset 00:04:27.367 19:18:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.367 19:18:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:28.743 00:04:28.743 real 0m2.578s 00:04:28.743 user 0m1.018s 00:04:28.743 sys 0m1.575s 00:04:28.743 19:18:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:28.743 19:18:54 -- common/autotest_common.sh@10 -- # set +x 00:04:28.743 ************************************ 00:04:28.743 END TEST allowed 00:04:28.743 ************************************ 00:04:28.743 00:04:28.743 real 0m13.591s 00:04:28.743 user 0m3.422s 00:04:28.743 sys 0m5.285s 00:04:28.743 19:18:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:28.743 19:18:54 -- common/autotest_common.sh@10 -- # set +x 00:04:28.743 ************************************ 00:04:28.743 END TEST acl 00:04:28.743 ************************************ 00:04:28.743 19:18:54 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:28.743 19:18:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:28.743 19:18:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:28.743 19:18:54 -- common/autotest_common.sh@10 -- # set +x 00:04:28.743 ************************************ 00:04:28.743 START TEST hugepages 00:04:28.743 ************************************ 00:04:28.743 19:18:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:29.021 * Looking for test storage... 
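The verify step that both the denied and allowed tests lean on comes down to resolving each BDF's driver symlink; a minimal sketch, assuming the sysfs layout shown in the trace:

    # sketch of acl.sh's verify: each BDF must exist in sysfs and its
    # driver symlink must resolve to .../drivers/nvme
    for bdf in 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        [[ -e /sys/bus/pci/devices/$bdf ]] || continue
        driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
        [[ ${driver##*/} == nvme ]] && echo "$bdf is bound to nvme"
    done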
00:04:29.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:29.021 19:18:54 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:29.021 19:18:54 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:29.021 19:18:54 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:29.021 19:18:54 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:29.021 19:18:54 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:29.021 19:18:54 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:29.021 19:18:54 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:29.021 19:18:54 -- setup/common.sh@18 -- # local node= 00:04:29.021 19:18:54 -- setup/common.sh@19 -- # local var val 00:04:29.021 19:18:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:29.021 19:18:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.021 19:18:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.021 19:18:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.021 19:18:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.021 19:18:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.021 19:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 19:18:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5402344 kB' 'MemAvailable: 7398616 kB' 'Buffers: 2436 kB' 'Cached: 2207884 kB' 'SwapCached: 0 kB' 'Active: 854272 kB' 'Inactive: 1473420 kB' 'Active(anon): 127884 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473420 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 664 kB' 'Writeback: 0 kB' 'AnonPages: 119004 kB' 'Mapped: 48736 kB' 'Shmem: 10512 kB' 'KReclaimable: 66804 kB' 'Slab: 143968 kB' 'SReclaimable: 66804 kB' 'SUnreclaim: 77164 kB' 'KernelStack: 6380 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 346644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54984 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:29.021 19:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 19:18:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.021 19:18:54 -- setup/common.sh@32 -- # continue 00:04:29.021 19:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 19:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 19:18:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.021 19:18:54 -- setup/common.sh@32 -- # continue 00:04:29.021 19:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 19:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 19:18:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.021 19:18:54 -- setup/common.sh@32 -- # continue 00:04:29.021 19:18:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:29.021 19:18:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:29.021 19:18:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:29.021 19:18:54 -- 
setup/common.sh@32 -- # continue
[trace elided: the same setup/common.sh@31-32 "read -r var val _ / [[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue" cycle repeats for every remaining /proc/meminfo key until Hugepagesize matches]
00:04:29.022 19:18:54 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:29.022 19:18:54 -- setup/common.sh@33 -- # echo 2048
00:04:29.022 19:18:54 -- setup/common.sh@33 -- # return 0
00:04:29.022 19:18:54 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:29.022 19:18:54 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:29.022 19:18:54 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:29.022 19:18:54 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:29.022 19:18:54 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:29.022 19:18:54 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
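A condensed sketch of the get_meminfo helper whose trace is collapsed above; the real common.sh version buffers /proc/meminfo with mapfile first, while this inlines the same IFS=': ' split.

    # condensed get_meminfo: split each /proc/meminfo row on ': ' and
    # print the value of the requested key (the units column is dropped)
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }
    get_meminfo Hugepagesize    # prints 2048 on this VM, as traced above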
00:04:29.022 19:18:54 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:29.022 19:18:54 -- setup/hugepages.sh@207 -- # get_nodes 00:04:29.022 19:18:54 -- setup/hugepages.sh@27 -- # local node 00:04:29.022 19:18:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.022 19:18:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:29.022 19:18:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:29.022 19:18:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:29.022 19:18:54 -- setup/hugepages.sh@208 -- # clear_hp 00:04:29.022 19:18:54 -- setup/hugepages.sh@37 -- # local node hp 00:04:29.022 19:18:54 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:29.022 19:18:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.022 19:18:54 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.022 19:18:54 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:29.022 19:18:54 -- setup/hugepages.sh@41 -- # echo 0 00:04:29.022 19:18:54 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:29.022 19:18:54 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:29.022 19:18:54 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:29.022 19:18:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:29.022 19:18:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:29.022 19:18:54 -- common/autotest_common.sh@10 -- # set +x 00:04:29.022 ************************************ 00:04:29.022 START TEST default_setup 00:04:29.022 ************************************ 00:04:29.022 19:18:54 -- common/autotest_common.sh@1111 -- # default_setup 00:04:29.022 19:18:54 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:29.022 19:18:54 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:29.022 19:18:54 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:29.022 19:18:54 -- setup/hugepages.sh@51 -- # shift 00:04:29.022 19:18:54 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:29.022 19:18:54 -- setup/hugepages.sh@52 -- # local node_ids 00:04:29.022 19:18:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.022 19:18:54 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:29.022 19:18:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:29.022 19:18:54 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:29.022 19:18:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.022 19:18:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:29.022 19:18:54 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:29.022 19:18:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.022 19:18:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.022 19:18:54 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:29.022 19:18:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:29.022 19:18:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:29.022 19:18:54 -- setup/hugepages.sh@73 -- # return 0 00:04:29.022 19:18:54 -- setup/hugepages.sh@137 -- # setup output 00:04:29.022 19:18:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.022 19:18:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.591 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.534 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.534 
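The page-count arithmetic behind default_setup's get_test_nr_hugepages call, assuming the traced sizes are in kB as the values imply:

    # the request is 2097152 kB (2 GiB); dividing by the 2048 kB default
    # hugepage size gives the nr_hugepages=1024 assigned to node 0 above
    size=2097152
    default_hugepages=2048
    nr_hugepages=$(( size / default_hugepages ))
    echo "node 0: $nr_hugepages hugepages"    # -> node 0: 1024 hugepages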
0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.534 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.534 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.534 19:18:56 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:30.534 19:18:56 -- setup/hugepages.sh@89 -- # local node 00:04:30.534 19:18:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.534 19:18:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.534 19:18:56 -- setup/hugepages.sh@92 -- # local surp 00:04:30.534 19:18:56 -- setup/hugepages.sh@93 -- # local resv 00:04:30.534 19:18:56 -- setup/hugepages.sh@94 -- # local anon 00:04:30.534 19:18:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.534 19:18:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.534 19:18:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.534 19:18:56 -- setup/common.sh@18 -- # local node= 00:04:30.534 19:18:56 -- setup/common.sh@19 -- # local var val 00:04:30.534 19:18:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.534 19:18:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.534 19:18:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.534 19:18:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.534 19:18:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.534 19:18:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.534 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.534 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.534 19:18:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7519420 kB' 'MemAvailable: 9515492 kB' 'Buffers: 2436 kB' 'Cached: 2207880 kB' 'SwapCached: 0 kB' 'Active: 861308 kB' 'Inactive: 1473448 kB' 'Active(anon): 134920 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473448 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 836 kB' 'Writeback: 0 kB' 'AnonPages: 126040 kB' 'Mapped: 48888 kB' 'Shmem: 10472 kB' 'KReclaimable: 66348 kB' 'Slab: 143156 kB' 'SReclaimable: 66348 kB' 'SUnreclaim: 76808 kB' 'KernelStack: 6432 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:30.534 19:18:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.534 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.534 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.534 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.534 19:18:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.534 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.534 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.534 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.534 19:18:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.534 
19:18:56 -- setup/common.sh@32 -- # continue
[trace elided: the same setup/common.sh@31-32 "read -r var val _ / [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" cycle repeats for every /proc/meminfo key until AnonHugePages matches]
00:04:30.535 19:18:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.535 19:18:56 -- setup/common.sh@33 -- # echo 0
00:04:30.535
19:18:56 -- setup/common.sh@33 -- # return 0 00:04:30.535 19:18:56 -- setup/hugepages.sh@97 -- # anon=0 00:04:30.535 19:18:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.535 19:18:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.535 19:18:56 -- setup/common.sh@18 -- # local node= 00:04:30.535 19:18:56 -- setup/common.sh@19 -- # local var val 00:04:30.535 19:18:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.535 19:18:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.535 19:18:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.535 19:18:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.535 19:18:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.535 19:18:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.535 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.535 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.535 19:18:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7519420 kB' 'MemAvailable: 9515496 kB' 'Buffers: 2436 kB' 'Cached: 2207880 kB' 'SwapCached: 0 kB' 'Active: 861116 kB' 'Inactive: 1473452 kB' 'Active(anon): 134728 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 836 kB' 'Writeback: 0 kB' 'AnonPages: 125824 kB' 'Mapped: 48752 kB' 'Shmem: 10472 kB' 'KReclaimable: 66348 kB' 'Slab: 143168 kB' 'SReclaimable: 66348 kB' 'SUnreclaim: 76820 kB' 'KernelStack: 6432 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:30.535 19:18:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.535 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.535 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.535 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.535 19:18:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.535 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.535 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.535 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.535 19:18:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.535 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.535 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.535 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.535 19:18:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.535 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.535 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.535 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.535 19:18:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.535 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.535 19:18:56 -- 
setup/common.sh@31 -- # IFS=': '
[trace elided: the same setup/common.sh@31-32 "read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" cycle repeats for every /proc/meminfo key until HugePages_Surp matches]
00:04:30.536 19:18:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.536 19:18:56 -- setup/common.sh@33 -- # echo 0
00:04:30.536 19:18:56 -- setup/common.sh@33 -- # return 0
00:04:30.536 19:18:56 -- setup/hugepages.sh@99 -- # surp=0
00:04:30.536 19:18:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:30.536 19:18:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:30.536 19:18:56 -- setup/common.sh@18 -- # local node=
00:04:30.536 19:18:56 -- setup/common.sh@19 -- # local var val
00:04:30.536 19:18:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.536 19:18:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.536 19:18:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.536 19:18:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.536 19:18:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.536 19:18:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.536 19:18:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7519420 kB' 'MemAvailable: 9515496 kB' 'Buffers: 2436 kB' 'Cached: 
2207880 kB' 'SwapCached: 0 kB' 'Active: 861068 kB' 'Inactive: 1473452 kB' 'Active(anon): 134680 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 836 kB' 'Writeback: 0 kB' 'AnonPages: 126036 kB' 'Mapped: 48752 kB' 'Shmem: 10472 kB' 'KReclaimable: 66348 kB' 'Slab: 143164 kB' 'SReclaimable: 66348 kB' 'SUnreclaim: 76816 kB' 'KernelStack: 6416 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:30.536 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.536 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.536 19:18:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.536 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.536 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.536 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.536 19:18:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.536 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.536 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.536 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- 
# [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 
-- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 
-- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.537 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.537 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.538 
19:18:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # continue 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.538 19:18:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.538 19:18:56 -- setup/common.sh@33 -- # echo 0 00:04:30.538 19:18:56 -- setup/common.sh@33 -- # return 0 00:04:30.538 19:18:56 -- setup/hugepages.sh@100 -- # resv=0 00:04:30.538 19:18:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.538 nr_hugepages=1024 00:04:30.538 resv_hugepages=0 00:04:30.538 19:18:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.538 surplus_hugepages=0 00:04:30.538 19:18:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.538 anon_hugepages=0 00:04:30.538 19:18:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.538 19:18:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.538 19:18:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.538 19:18:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.538 19:18:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.538 19:18:56 -- setup/common.sh@18 -- # local node= 00:04:30.538 19:18:56 -- setup/common.sh@19 -- # local var val 00:04:30.538 19:18:56 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.538 19:18:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.538 19:18:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.538 19:18:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.538 19:18:56 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.538 19:18:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.538 19:18:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.538 19:18:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7519420 kB' 'MemAvailable: 9515496 kB' 'Buffers: 2436 kB' 'Cached: 2207880 kB' 'SwapCached: 0 kB' 'Active: 861288 kB' 'Inactive: 1473452 kB' 'Active(anon): 134900 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 836 kB' 'Writeback: 0 kB' 'AnonPages: 125976 kB' 'Mapped: 48752 kB' 'Shmem: 10472 kB' 'KReclaimable: 66348 kB' 'Slab: 143164 kB' 'SReclaimable: 66348 kB' 'SUnreclaim: 76816 kB' 'KernelStack: 6416 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 
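The helper the trace keeps re-entering is setup/common.sh's get_meminfo. Below is a minimal sketch of it, reconstructed from the xtrace above and simplified relative to the real script: pick /proc/meminfo or the per-node sysfs copy, strip any "Node <id>" prefix, then scan key by key until the requested field matches.

    #!/usr/bin/env bash
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # with a node id, read the per-node copy from sysfs instead
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix every line with "Node <id> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # print only the value column, e.g. "get_meminfo HugePages_Surp" -> 0
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    # usage: surp=$(get_meminfo HugePages_Surp); free0=$(get_meminfo MemFree 0)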
00:04:30.538 19:18:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:30.538 19:18:56 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:30.538 19:18:56 -- setup/common.sh@18 -- # local node=
00:04:30.538 19:18:56 -- setup/common.sh@19 -- # local var val
00:04:30.538 19:18:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.538 19:18:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.538 19:18:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.538 19:18:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.538 19:18:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.538 19:18:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.538 19:18:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' ... [snapshot identical to the previous one except 'Active: 861288 kB' 'Active(anon): 134900 kB' 'AnonPages: 125976 kB' 'PageTables: 4468 kB']
[xtrace scans field by field, from MemTotal through Unaccepted, until the HugePages_Total line matches]
00:04:30.539 19:18:56 -- setup/common.sh@33 -- # echo 1024
00:04:30.539 19:18:56 -- setup/common.sh@33 -- # return 0
00:04:30.539 19:18:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.539 19:18:56 -- setup/hugepages.sh@112 -- # get_nodes
00:04:30.539 19:18:56 -- setup/hugepages.sh@27 -- # local node
00:04:30.539 19:18:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.539 19:18:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:30.539 19:18:56 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:30.539 19:18:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:30.539 19:18:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:30.539 19:18:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:30.539 19:18:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
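The node-discovery step just traced is hugepages.sh's get_nodes. A sketch under the same assumptions: on this single-node VM it finds /sys/devices/system/node/node0 and records its HugePages_Total (1024). The command substitution for the per-node value is an assumption about how the recorded 1024 was produced, and get_meminfo is the sketch from above, not SPDK's exact function.

    shopt -s extglob
    declare -a nodes_sys

    get_nodes() {
        local node no_nodes=0
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} -> the numeric node id, e.g. 0
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
            ((++no_nodes))
        done
        (( no_nodes > 0 ))  # fail if no memory nodes were found
    }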
00:04:30.539 19:18:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.539 19:18:56 -- setup/common.sh@18 -- # local node=0
00:04:30.539 19:18:56 -- setup/common.sh@19 -- # local var val
00:04:30.539 19:18:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:30.539 19:18:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.539 19:18:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:30.539 19:18:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:30.539 19:18:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.539 19:18:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.540 19:18:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7519420 kB' 'MemUsed: 4722560 kB' 'SwapCached: 0 kB' 'Active: 861548 kB' 'Inactive: 1473452 kB' 'Active(anon): 135160 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 836 kB' 'Writeback: 0 kB' 'FilePages: 2210316 kB' 'Mapped: 48752 kB' 'AnonPages: 126236 kB' 'Shmem: 10472 kB' 'KernelStack: 6416 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66348 kB' 'Slab: 143164 kB' 'SReclaimable: 66348 kB' 'SUnreclaim: 76816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace scans the per-node fields from MemTotal through HugePages_Free until the HugePages_Surp line matches; the wall-clock stamp advances from 00:04:30.540 to 00:04:30.801 during the scan]
00:04:30.801 19:18:56 -- setup/common.sh@33 -- # echo 0
00:04:30.801 19:18:56 -- setup/common.sh@33 -- # return 0
00:04:30.801 19:18:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:30.801 19:18:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.801 19:18:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.801 19:18:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.801 node0=1024 expecting 1024
00:04:30.801 19:18:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:30.801 19:18:56 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:30.801
00:04:30.801 real	0m1.544s
00:04:30.801 user	0m0.610s
00:04:30.801 sys	0m0.915s
00:04:30.801 19:18:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:30.801 19:18:56 -- common/autotest_common.sh@10 -- # set +x
00:04:30.801 ************************************
00:04:30.801 END TEST default_setup
00:04:30.801 ************************************
00:04:30.801 19:18:56 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:30.801 19:18:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:30.801 19:18:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:30.801 19:18:56 -- common/autotest_common.sh@10 -- # set +x
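Stripped of the per-field scans, the whole default_setup verification above reduces to a few comparisons: the kernel's HugePages_Total must equal the requested pages plus surplus plus reserved, and each node's count must match the expected split ("node0=1024 expecting 1024"). A standalone rendering follows; verify_totals is a hypothetical name, get_meminfo/nodes_sys are the sketches from above, and the per-node "expecting" value is simplified to nr_hugepages, which is all the single-node trace shows.

    verify_totals() {
        local nr_hugepages=$1
        local surp resv total node
        surp=$(get_meminfo HugePages_Surp)    # -> 0 in the run above
        resv=$(get_meminfo HugePages_Rsvd)    # -> 0
        total=$(get_meminfo HugePages_Total)  # -> 1024
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
        # pool accounting must balance before per-node checks
        (( total == nr_hugepages + surp + resv )) || return 1
        for node in "${!nodes_sys[@]}"; do
            echo "node$node=${nodes_sys[node]} expecting $nr_hugepages"
            [[ ${nodes_sys[node]} == "$nr_hugepages" ]] || return 1
        done
    }
    # in the trace: verify_totals 1024 succeeds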
00:04:30.801 ************************************
00:04:30.801 START TEST per_node_1G_alloc
00:04:30.801 ************************************
00:04:30.801 19:18:56 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:04:30.801 19:18:56 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:30.801 19:18:56 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:30.801 19:18:56 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:30.801 19:18:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:30.801 19:18:56 -- setup/hugepages.sh@51 -- # shift
00:04:30.801 19:18:56 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:30.801 19:18:56 -- setup/hugepages.sh@52 -- # local node_ids
00:04:30.801 19:18:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:30.801 19:18:56 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:30.801 19:18:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:30.801 19:18:56 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:30.801 19:18:56 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:30.801 19:18:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:30.801 19:18:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:30.801 19:18:56 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:30.801 19:18:56 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:30.801 19:18:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:30.801 19:18:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:30.801 19:18:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:30.801 19:18:56 -- setup/hugepages.sh@73 -- # return 0
00:04:30.801 19:18:56 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:30.801 19:18:56 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:30.801 19:18:56 -- setup/hugepages.sh@146 -- # setup output
00:04:30.801 19:18:56 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:30.801 19:18:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:31.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:31.633 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:31.633 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:31.633 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:31.633 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:31.633 19:18:57 -- setup/hugepages.sh@147 -- # nr_hugepages=512
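The arithmetic get_test_nr_hugepages just performed, made explicit: a 1048576 kB (1 GiB) request at the 2048 kB default hugepage size comes out to 512 pages, matching NRHUGE=512 above. The sysfs write below is the standard kernel per-node hugepage knob and an assumption about what setup.sh ultimately does with NRHUGE/HUGENODE; the log itself does not print it, and it needs root.

    size_kb=1048576                    # requested: 1 GiB, expressed in kB
    hp_kb=$(get_meminfo Hugepagesize)  # default hugepage size: 2048 kB on this VM
    nrhuge=$(( size_kb / hp_kb ))      # -> 512 pages
    echo "$nrhuge" > "/sys/devices/system/node/node0/hugetlb/hugepages-${hp_kb}kB/nr_hugepages"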
-e /sys/devices/system/node/node/meminfo ]]
00:04:31.634 19:18:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.634 19:18:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.634 19:18:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.634 19:18:57 -- setup/common.sh@31 -- # IFS=': '
00:04:31.634 19:18:57 -- setup/common.sh@31 -- # read -r var val _
00:04:31.634 19:18:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8567248 kB' 'MemAvailable: 10563324 kB' 'Buffers: 2436 kB' 'Cached: 2207880 kB' 'SwapCached: 0 kB' 'Active: 861664 kB' 'Inactive: 1473452 kB' 'Active(anon): 135276 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 992 kB' 'Writeback: 0 kB' 'AnonPages: 126144 kB' 'Mapped: 48884 kB' 'Shmem: 10472 kB' 'KReclaimable: 66348 kB' 'Slab: 143272 kB' 'SReclaimable: 66348 kB' 'SUnreclaim: 76924 kB' 'KernelStack: 6460 kB' 'PageTables: 4636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 359896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: setup/common.sh@31-32 reads the snapshot above key by key, hitting continue on every key from MemTotal through HardwareCorrupted, until AnonHugePages matches]
00:04:31.635 19:18:57 -- setup/common.sh@33 -- # echo 0
00:04:31.635 19:18:57 -- setup/common.sh@33 -- # return 0
00:04:31.635 19:18:57 -- setup/hugepages.sh@97 -- # anon=0
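What get_meminfo is doing in the trace above: it picks a counters file, /sys/devices/system/node/node<N>/meminfo when a node argument is given (with no argument the probed path degenerates to /sys/devices/system/node/node/meminfo, which fails the -e test, hence the fall-back to /proc/meminfo), strips the "Node <N> " prefix that per-node files carry, and then scans "key: value" pairs until the requested key matches, echoing its value. A minimal sketch of that pattern in plain bash; the name get_meminfo_sketch and the while-read structure are illustrative, not the actual SPDK helper (which, as the trace shows, uses mapfile and an extglob strip):

get_meminfo_sketch() {
    # Illustrative rewrite of the pattern traced above, not the SPDK source.
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo                    # default: system-wide counters
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#"Node $node "}               # per-node files prefix every line with "Node <N> "
        IFS=': ' read -r var val _ <<< "$line"   # split "HugePages_Surp:    0" into key and value
        if [[ $var == "$get" ]]; then
            echo "$val"                          # the trace's "echo 0" / "echo 512"
            return 0
        fi
    done < "$mem_f"
    return 1                                     # requested key not present
}

Against the snapshot just printed, get_meminfo_sketch AnonHugePages would echo 0, matching the anon=0 recorded above.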
00:04:31.635 19:18:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:31.635 19:18:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:31.635 19:18:57 -- setup/common.sh@18 -- # local node=
00:04:31.635 19:18:57 -- setup/common.sh@19 -- # local var val
00:04:31.635 19:18:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.635 19:18:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.635 19:18:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.635 19:18:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.635 19:18:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.635 19:18:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.635 19:18:57 -- setup/common.sh@31 -- # IFS=': '
00:04:31.635 19:18:57 -- setup/common.sh@31 -- # read -r var val _
00:04:31.635 19:18:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8567024 kB' 'MemAvailable: 10563100 kB' 'Buffers: 2436 kB' 'Cached: 2207880 kB' 'SwapCached: 0 kB' 'Active: 861252 kB' 'Inactive: 1473452 kB' 'Active(anon): 134864 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 992 kB' 'Writeback: 0 kB' 'AnonPages: 125800 kB' 'Mapped: 48760 kB' 'Shmem: 10472 kB' 'KReclaimable: 66348 kB' 'Slab: 143260 kB' 'SReclaimable: 66348 kB' 'SUnreclaim: 76912 kB' 'KernelStack: 6444 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 359896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: the key-matching loop walks MemTotal through HugePages_Rsvd, hitting continue on each, until HugePages_Surp matches]
00:04:31.636 19:18:57 -- setup/common.sh@33 -- # echo 0
00:04:31.636 19:18:57 -- setup/common.sh@33 -- # return 0
00:04:31.636 19:18:57 -- setup/hugepages.sh@99 -- # surp=0
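Under the same illustrative sketch, the three bookkeeping reads hugepages.sh performs in this stretch (trace markers @97, @99 and @100) reduce to one-liners; the variable names mirror the trace:

anon=$(get_meminfo_sketch AnonHugePages)    # 0: no anonymous transparent hugepages in use
surp=$(get_meminfo_sketch HugePages_Surp)   # 0: no surplus pages beyond the configured pool
resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0: no pages reserved but not yet faulted in

The HugePages_Rsvd read that follows is the same scan against a fresh snapshot: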
00:04:31.636 19:18:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:31.636 19:18:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:31.636 19:18:57 -- setup/common.sh@18 -- # local node=
00:04:31.636 19:18:57 -- setup/common.sh@19 -- # local var val
00:04:31.636 19:18:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.636 19:18:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.636 19:18:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.636 19:18:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.636 19:18:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.636 19:18:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.636 19:18:57 -- setup/common.sh@31 -- # IFS=': '
00:04:31.636 19:18:57 -- setup/common.sh@31 -- # read -r var val _
00:04:31.636 19:18:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8567024 kB' 'MemAvailable: 10563100 kB' 'Buffers: 2436 kB' 'Cached: 2207880 kB' 'SwapCached: 0 kB' 'Active: 861500 kB' 'Inactive: 1473452 kB' 'Active(anon): 135112 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 992 kB' 'Writeback: 0 kB' 'AnonPages: 126064 kB' 'Mapped: 48760 kB' 'Shmem: 10472 kB' 'KReclaimable: 66348 kB' 'Slab: 143256 kB' 'SReclaimable: 66348 kB' 'SUnreclaim: 76908 kB' 'KernelStack: 6444 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 359896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: the key-matching loop again walks MemTotal through HugePages_Free, hitting continue on each, before HugePages_Rsvd matches]
00:04:31.638 19:18:57 -- setup/common.sh@33 -- # echo 0
00:04:31.638 19:18:57 -- setup/common.sh@33 -- # return 0
00:04:31.638 19:18:57 -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=512
00:04:31.638 19:18:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
resv_hugepages=0
00:04:31.638 19:18:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:04:31.638 19:18:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:04:31.638 19:18:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:31.638 19:18:57 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:31.638 19:18:57 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:04:31.638 19:18:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:31.638 19:18:57 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:31.638 19:18:57 -- setup/common.sh@18 -- # local node=
00:04:31.638 19:18:57 -- setup/common.sh@19 -- # local var val
00:04:31.638 19:18:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.638 19:18:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.638 19:18:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:31.638 19:18:57 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:31.638 19:18:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.638 19:18:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.638 19:18:57 -- setup/common.sh@31 -- # IFS=': '
00:04:31.638 19:18:57 -- setup/common.sh@31 -- # read -r var val _
00:04:31.638 19:18:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8567024 kB' 'MemAvailable: 10563100 kB' 'Buffers: 2436 kB' 'Cached: 2207880 kB' 'SwapCached: 0 kB' 'Active: 861276 kB' 'Inactive: 1473452 kB' 'Active(anon): 134888 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 992 kB' 'Writeback: 0 kB' 'AnonPages: 125804 kB' 'Mapped: 48760 kB' 'Shmem: 10472 kB' 'KReclaimable: 66348 kB' 'Slab: 143256 kB' 'SReclaimable: 66348 kB' 'SUnreclaim: 76908 kB' 'KernelStack: 6444 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 359896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: the key-matching loop walks MemTotal through Unaccepted, hitting continue on each, before HugePages_Total matches]
00:04:31.639 19:18:57 -- setup/common.sh@33 -- # echo 512
00:04:31.639 19:18:57 -- setup/common.sh@33 -- # return 0
00:04:31.639 19:18:57 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:31.639 19:18:57 -- setup/hugepages.sh@112 -- # get_nodes
00:04:31.639 19:18:57 -- setup/hugepages.sh@27 -- # local node
00:04:31.639 19:18:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:31.639 19:18:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:31.639 19:18:57 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:31.639 19:18:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:31.639 19:18:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:31.639 19:18:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:31.639 19:18:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
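The arithmetic just logged is the point of this stretch: with a pool of 512 hugepages configured, hugepages.sh asserts that HugePages_Total (512, as the query above returned) equals nr_hugepages plus surplus plus reserved (512 + 0 + 0), then spreads that expectation across NUMA nodes. This VM has a single node, so node 0 is expected to hold the whole pool, which the per-node HugePages_Surp query traced below checks against /sys/devices/system/node/node0/meminfo. Restated compactly with the illustrative sketch from earlier (values are the ones observed in this run):

nr_hugepages=512                                       # pool size configured by this test run
total=$(get_meminfo_sketch HugePages_Total)            # 512 in the snapshots above
surp=$(get_meminfo_sketch HugePages_Surp)              # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)              # 0
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
node0_surp=$(get_meminfo_sketch HugePages_Surp 0)      # per-node read via node0/meminfo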
00:04:31.639 19:18:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:31.639 19:18:57 -- setup/common.sh@18 -- # local node=0
00:04:31.639 19:18:57 -- setup/common.sh@19 -- # local var val
00:04:31.639 19:18:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:31.639 19:18:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:31.639 19:18:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:31.639 19:18:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:31.639 19:18:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:31.639 19:18:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:31.639 19:18:57 -- setup/common.sh@31 -- # IFS=': '
00:04:31.639 19:18:57 -- setup/common.sh@31 -- # read -r var val _
00:04:31.639 19:18:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8567024 kB' 'MemUsed: 3674956 kB' 'SwapCached: 0 kB' 'Active: 861536 kB' 'Inactive: 1473452 kB' 'Active(anon): 135148 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473452 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 992 kB' 'Writeback: 0 kB' 'FilePages: 2210316 kB' 'Mapped: 48760 kB' 'AnonPages: 126064 kB' 'Shmem: 10472 kB' 'KernelStack: 6444 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66348 kB' 'Slab: 143256 kB' 'SReclaimable: 66348 kB' 'SUnreclaim: 76908 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[trace condensed: the key-matching loop walks the node0 snapshot key by key, MemTotal through FilePmdMapped at this point, and the scan continues below]
setup/common.sh@31 -- # IFS=': ' 00:04:31.640 19:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 19:18:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 19:18:57 -- setup/common.sh@32 -- # continue 00:04:31.640 19:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 19:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 19:18:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 19:18:57 -- setup/common.sh@32 -- # continue 00:04:31.640 19:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 19:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 19:18:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 19:18:57 -- setup/common.sh@32 -- # continue 00:04:31.640 19:18:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.640 19:18:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.640 19:18:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.640 19:18:57 -- setup/common.sh@33 -- # echo 0 00:04:31.640 19:18:57 -- setup/common.sh@33 -- # return 0 00:04:31.640 19:18:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.640 19:18:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.640 19:18:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.640 19:18:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.640 19:18:57 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:31.640 node0=512 expecting 512 00:04:31.640 19:18:57 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:31.640 00:04:31.640 real 0m0.866s 00:04:31.640 user 0m0.373s 00:04:31.640 sys 0m0.544s 00:04:31.640 19:18:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:31.640 19:18:57 -- common/autotest_common.sh@10 -- # set +x 00:04:31.640 ************************************ 00:04:31.640 END TEST per_node_1G_alloc 00:04:31.640 ************************************ 00:04:31.640 19:18:57 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:31.640 19:18:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:31.640 19:18:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.640 19:18:57 -- common/autotest_common.sh@10 -- # set +x 00:04:31.901 ************************************ 00:04:31.901 START TEST even_2G_alloc 00:04:31.901 ************************************ 00:04:31.901 19:18:57 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:04:31.901 19:18:57 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:31.901 19:18:57 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:31.901 19:18:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:31.901 19:18:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:31.901 19:18:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:31.901 19:18:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:31.901 19:18:57 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:31.901 19:18:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.901 19:18:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:31.901 19:18:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:31.901 19:18:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.901 19:18:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.901 19:18:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:31.901 19:18:57 -- 
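For readers following the trace: every meminfo read above is the same get_meminfo helper walking one meminfo file a line at a time and printing the value of the requested key. A minimal sketch, reconstructed from the setup/common.sh xtrace records just logged (argument handling simplified; this is not the verbatim SPDK source):

    shopt -s extglob    # the +([0-9]) pattern below needs extended globbing

    # get_meminfo <key> [node]: print the value of one meminfo field,
    # reading the per-node sysfs file when a node id is given.
    get_meminfo() {
        local get=$1 node=${2:-} var val line
        local mem_f=/proc/meminfo mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix each line with "Node <n> "; strip that off
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Each compare-and-continue record in the log is one iteration of that loop, so something like get_meminfo HugePages_Total 0 would reproduce the node0 read above (512 on this box); a key that sits near the end of the file costs a full pass over the dump.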
00:04:31.640 19:18:57 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:31.640 19:18:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:31.640 19:18:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:31.640 19:18:57 -- common/autotest_common.sh@10 -- # set +x
00:04:31.901 ************************************
00:04:31.901 START TEST even_2G_alloc
00:04:31.901 ************************************
00:04:31.901 19:18:57 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:31.901 19:18:57 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:31.901 19:18:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:31.901 19:18:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:31.901 19:18:57 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:31.901 19:18:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:31.901 19:18:57 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:31.901 19:18:57 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:31.901 19:18:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:31.901 19:18:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:31.901 19:18:57 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:31.901 19:18:57 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:31.901 19:18:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:31.901 19:18:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:31.901 19:18:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:31.901 19:18:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:31.901 19:18:57 -- setup/hugepages.sh@83 -- # : 0
00:04:31.901 19:18:57 -- setup/hugepages.sh@84 -- # : 0
00:04:31.901 19:18:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:31.901 19:18:57 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:31.901 19:18:57 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:31.901 19:18:57 -- setup/hugepages.sh@153 -- # setup output
00:04:31.901 19:18:57 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:31.901 19:18:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:32.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:32.469 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.469 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.469 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.469 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:32.469 19:18:58 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:32.469 19:18:58 -- setup/hugepages.sh@89 -- # local node
00:04:32.469 19:18:58 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:32.469 19:18:58 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:32.469 19:18:58 -- setup/hugepages.sh@92 -- # local surp
00:04:32.469 19:18:58 -- setup/hugepages.sh@93 -- # local resv
00:04:32.469 19:18:58 -- setup/hugepages.sh@94 -- # local anon
00:04:32.469 19:18:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:32.469 19:18:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:32.469 19:18:58 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:32.469 19:18:58 -- setup/common.sh@18 -- # local node=
00:04:32.469 19:18:58 -- setup/common.sh@19 -- # local var val
00:04:32.469 19:18:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.469 19:18:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.469 19:18:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.469 19:18:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.469 19:18:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.469 19:18:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.469 19:18:58 -- setup/common.sh@31 -- # IFS=': '
00:04:32.469 19:18:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7516380 kB' 'MemAvailable: 9512476 kB' 'Buffers: 2436 kB' 'Cached: 2207888 kB' 'SwapCached: 0 kB' 'Active: 861908 kB' 'Inactive: 1473464 kB' 'Active(anon): 135520 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473464 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1140 kB' 'Writeback: 0 kB' 'AnonPages: 126428 kB' 'Mapped: 49048 kB' 'Shmem: 10472 kB' 'KReclaimable: 66364 kB' 'Slab: 143396 kB' 'SReclaimable: 66364 kB' 'SUnreclaim: 77032 kB' 'KernelStack: 6408 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:04:32.469 19:18:58 -- setup/common.sh@31 -- # read -r var val _
[... near-identical xtrace records elided: the compare / continue cycle repeats for every /proc/meminfo field from MemTotal through HardwareCorrupted ...]
00:04:32.733 19:18:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.733 19:18:58 -- setup/common.sh@33 -- # echo 0
00:04:32.733 19:18:58 -- setup/common.sh@33 -- # return 0
00:04:32.733 19:18:58 -- setup/hugepages.sh@97 -- # anon=0
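verify_nr_hugepages gathers three counters before it checks anything, and the anon read above is gated on transparent hugepages not being globally disabled (the hugepages.sh@96 record compares against the "[never]" marker). A condensed view of that sequence, assuming the get_meminfo sketch earlier; the sysfs path is the standard THP switch that produces the "always [madvise] never" string seen in the trace:

    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)                 # 0 kB in this run
    fi
    surp=$(get_meminfo HugePages_Surp)                    # next cycle below
    resv=$(get_meminfo HugePages_Rsvd)                    # and the cycle after that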
00:04:32.733 19:18:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:32.733 19:18:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.733 19:18:58 -- setup/common.sh@18 -- # local node=
00:04:32.733 19:18:58 -- setup/common.sh@19 -- # local var val
00:04:32.733 19:18:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.733 19:18:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.733 19:18:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.733 19:18:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.733 19:18:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.733 19:18:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.733 19:18:58 -- setup/common.sh@31 -- # IFS=': '
00:04:32.733 19:18:58 -- setup/common.sh@31 -- # read -r var val _
00:04:32.733 19:18:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7516440 kB' 'MemAvailable: 9512540 kB' 'Buffers: 2436 kB' 'Cached: 2207892 kB' 'SwapCached: 0 kB' 'Active: 861044 kB' 'Inactive: 1473468 kB' 'Active(anon): 134656 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473468 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1140 kB' 'Writeback: 0 kB' 'AnonPages: 126036 kB' 'Mapped: 48772 kB' 'Shmem: 10472 kB' 'KReclaimable: 66364 kB' 'Slab: 143384 kB' 'SReclaimable: 66364 kB' 'SUnreclaim: 77020 kB' 'KernelStack: 6416 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[... near-identical xtrace records elided: the compare / continue cycle repeats for every /proc/meminfo field from MemTotal through HugePages_Free ...]
00:04:32.734 19:18:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.734 19:18:58 -- setup/common.sh@33 -- # echo 0
00:04:32.734 19:18:58 -- setup/common.sh@33 -- # return 0
00:04:32.734 19:18:58 -- setup/hugepages.sh@99 -- # surp=0
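One detail that recurs in every cycle above is the setup/common.sh@29 record, mem=("${mem[@]#Node +([0-9]) }"). Per-node meminfo lines under sysfs carry a "Node <n> " prefix that /proc/meminfo lines lack; stripping it lets the same key/value parser serve both files. A two-line illustration (the sample line is ours, shaped like the per-node sysfs format; extglob is required for the +([0-9]) pattern):

    shopt -s extglob
    line='Node 0 HugePages_Surp: 0'      # illustrative per-node sysfs line
    echo "${line#Node +([0-9]) }"        # -> HugePages_Surp: 0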
00:04:32.734 19:18:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:32.734 19:18:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:32.734 19:18:58 -- setup/common.sh@18 -- # local node=
00:04:32.734 19:18:58 -- setup/common.sh@19 -- # local var val
00:04:32.734 19:18:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.734 19:18:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.734 19:18:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.734 19:18:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.734 19:18:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.734 19:18:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.734 19:18:58 -- setup/common.sh@31 -- # IFS=': '
00:04:32.734 19:18:58 -- setup/common.sh@31 -- # read -r var val _
00:04:32.734 19:18:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7516440 kB' 'MemAvailable: 9512540 kB' 'Buffers: 2436 kB' 'Cached: 2207892 kB' 'SwapCached: 0 kB' 'Active: 861040 kB' 'Inactive: 1473468 kB' 'Active(anon): 134652 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473468 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1140 kB' 'Writeback: 0 kB' 'AnonPages: 126036 kB' 'Mapped: 48772 kB' 'Shmem: 10472 kB' 'KReclaimable: 66364 kB' 'Slab: 143380 kB' 'SReclaimable: 66364 kB' 'SUnreclaim: 77016 kB' 'KernelStack: 6416 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55032 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[... near-identical xtrace records elided: the compare / continue cycle repeats for every /proc/meminfo field from MemTotal through HugePages_Free ...]
00:04:32.735 19:18:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:32.735 19:18:58 -- setup/common.sh@33 -- # echo 0
00:04:32.735 19:18:58 -- setup/common.sh@33 -- # return 0
00:04:32.735 19:18:58 -- setup/hugepages.sh@100 -- # resv=0
00:04:32.735 19:18:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:32.735 nr_hugepages=1024
00:04:32.735 19:18:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:32.735 resv_hugepages=0
00:04:32.735 19:18:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:32.735 surplus_hugepages=0
00:04:32.735 19:18:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:32.735 anon_hugepages=0
00:04:32.735 19:18:58 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:32.735 19:18:58 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
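Those two arithmetic records are the heart of verify_nr_hugepages: the expected count comes straight from the requested size divided by the hugepage size, and the kernel's total must equal that count plus any surplus and reserved pages. A worked restatement with this run's numbers (variable names are ours, not the script's):

    size_kb=2097152                                  # get_test_nr_hugepages 2097152 above
    hugepagesize_kb=2048                             # Hugepagesize from the meminfo dumps
    nr_hugepages=$(( size_kb / hugepagesize_kb ))    # -> 1024
    surp=0 resv=0                                    # as read back above
    total=1024                                       # HugePages_Total, re-read below
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"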
00:04:32.735 19:18:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:32.735 19:18:58 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:32.735 19:18:58 -- setup/common.sh@18 -- # local node=
00:04:32.735 19:18:58 -- setup/common.sh@19 -- # local var val
00:04:32.735 19:18:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.735 19:18:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.735 19:18:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.735 19:18:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.735 19:18:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.735 19:18:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.735 19:18:58 -- setup/common.sh@31 -- # IFS=': '
00:04:32.735 19:18:58 -- setup/common.sh@31 -- # read -r var val _
00:04:32.735 19:18:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7516440 kB' 'MemAvailable: 9512540 kB' 'Buffers: 2436 kB' 'Cached: 2207892 kB' 'SwapCached: 0 kB' 'Active: 861100 kB' 'Inactive: 1473468 kB' 'Active(anon): 134712 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473468 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1140 kB' 'Writeback: 0 kB' 'AnonPages: 125872 kB' 'Mapped: 48772 kB' 'Shmem: 10472 kB' 'KReclaimable: 66364 kB' 'Slab: 143380 kB' 'SReclaimable: 66364 kB' 'SUnreclaim: 77016 kB' 'KernelStack: 6432 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: setup/common.sh@31-32 scans every key of the snapshot above with continue until HugePages_Total matches]
00:04:32.737 19:18:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:32.737 19:18:58 -- setup/common.sh@33 -- # echo 1024
00:04:32.737 19:18:58 -- setup/common.sh@33 -- # return 0
00:04:32.737 19:18:58 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:32.737 19:18:58 -- setup/hugepages.sh@112 -- # get_nodes
00:04:32.737 19:18:58 -- setup/hugepages.sh@27 -- # local node
00:04:32.737 19:18:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:32.737 19:18:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:32.737 19:18:58 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:32.737 19:18:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:32.737 19:18:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:32.737 19:18:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:32.737 19:18:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:32.737 19:18:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.737 19:18:58 -- setup/common.sh@18 -- # local node=0
00:04:32.737 19:18:58 -- setup/common.sh@19 -- # local var val
00:04:32.737 19:18:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:32.737 19:18:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.737 19:18:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:32.737 19:18:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:32.737 19:18:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.737 19:18:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.737 19:18:58 -- setup/common.sh@31 -- # IFS=': '
00:04:32.737 19:18:58 -- setup/common.sh@31 -- # read -r var val _
00:04:32.737 19:18:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7516440 kB' 'MemUsed: 4725540 kB' 'SwapCached: 0 kB' 'Active: 861044 kB' 'Inactive: 1473468 kB' 'Active(anon): 134656 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473468 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1140 kB' 'Writeback: 0 kB' 'FilePages: 2210328 kB' 'Mapped: 48772 kB' 'AnonPages: 126036 kB' 'Shmem: 10472 kB' 'KernelStack: 6416 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66364 kB' 'Slab: 143376 kB' 'SReclaimable: 66364 kB' 'SUnreclaim: 77012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: per-key scan of the node0 snapshot until HugePages_Surp matches]
00:04:32.738 19:18:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.738 19:18:58 -- setup/common.sh@33 -- # echo 0
00:04:32.738 19:18:58 -- setup/common.sh@33 -- # return 0
00:04:32.738 19:18:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
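Every elided scan above follows the same setup/common.sh pattern: pick /proc/meminfo or a per-node meminfo file, strip the per-node "Node N " key prefix, then walk "key: value" pairs until the requested key matches. A condensed, self-contained sketch of that pattern (simplified; the real helper lives in test/setup/common.sh and differs in detail):

# Sketch of the get_meminfo pattern traced above; names mirror the trace,
# the implementation is a simplification.
get_meminfo() {
    local get="$1" node="${2-}" line var val _
    local mem_f=/proc/meminfo
    # with a node argument, read that node's view instead of the global one
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem <"$mem_f"
    # per-node files prefix every key with "Node N "; strip it
    mem=("${mem[@]#Node $node }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue
        echo "${val:-0}"
        return 0
    done
    return 1
}
get_meminfo HugePages_Total      # -> 1024 in the even_2G_alloc trace
get_meminfo HugePages_Surp 0     # -> 0, read from node0's meminfo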
00:04:32.738 19:18:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:32.738 19:18:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:32.738 node0=1024 expecting 1024
00:04:32.738 ************************************
00:04:32.738 END TEST even_2G_alloc
00:04:32.738 ************************************
00:04:32.738 19:18:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:32.738 19:18:58 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:32.738 19:18:58 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:32.738
00:04:32.738 real 0m0.909s
00:04:32.738 user 0m0.424s
00:04:32.738 sys 0m0.527s
00:04:32.738 19:18:58 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:32.738 19:18:58 -- common/autotest_common.sh@10 -- # set +x
00:04:32.738 19:18:58 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:32.738 19:18:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:32.738 19:18:58 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:32.738 19:18:58 -- common/autotest_common.sh@10 -- # set +x
00:04:32.998 ************************************
00:04:32.998 START TEST odd_alloc
00:04:32.998 ************************************
00:04:32.998 19:18:58 -- common/autotest_common.sh@1111 -- # odd_alloc
00:04:32.998 19:18:58 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:32.998 19:18:58 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:32.998 19:18:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:32.998 19:18:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:32.999 19:18:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:32.999 19:18:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:32.999 19:18:58 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:32.999 19:18:58 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:32.999 19:18:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:32.999 19:18:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:32.999 19:18:58 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:32.999 19:18:58 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:32.999 19:18:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:32.999 19:18:58 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:32.999 19:18:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:32.999 19:18:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:04:32.999 19:18:58 -- setup/hugepages.sh@83 -- # : 0
00:04:32.999 19:18:58 -- setup/hugepages.sh@84 -- # : 0
00:04:32.999 19:18:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:32.999 19:18:58 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:32.999 19:18:58 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:32.999 19:18:58 -- setup/hugepages.sh@160 -- # setup output
00:04:32.999 19:18:58 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:32.999 19:18:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:33.257 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:33.518 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.518 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.518 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.518 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:33.518 19:18:59 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
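The odd_alloc prologue above turns HUGEMEM=2049 (MB) into nr_hugepages=1025. A sketch of that arithmetic, assuming the helper rounds the byte budget up to whole hugepages (ceiling division); the later snapshots' 'Hugetlb: 2099200 kB' (= 1025 x 2048 kB) is consistent with that reading:

# Sketch of the sizing step; variable names are illustrative.
HUGEMEM_MB=2049                              # exported by the odd_alloc test
hugepagesize_kb=2048                         # Hugepagesize from /proc/meminfo
size_kb=$(( HUGEMEM_MB * 1024 ))             # 2098176, as logged
nr_hugepages=$(( (size_kb + hugepagesize_kb - 1) / hugepagesize_kb ))
echo "nr_hugepages=$nr_hugepages"            # prints nr_hugepages=1025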
00:04:33.518 19:18:59 -- setup/hugepages.sh@89 -- # local node
00:04:33.518 19:18:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:33.518 19:18:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:33.518 19:18:59 -- setup/hugepages.sh@92 -- # local surp
00:04:33.518 19:18:59 -- setup/hugepages.sh@93 -- # local resv
00:04:33.518 19:18:59 -- setup/hugepages.sh@94 -- # local anon
00:04:33.518 19:18:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:33.519 19:18:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:33.519 19:18:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:33.519 19:18:59 -- setup/common.sh@18 -- # local node=
00:04:33.519 19:18:59 -- setup/common.sh@19 -- # local var val
00:04:33.519 19:18:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.519 19:18:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.519 19:18:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.519 19:18:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.519 19:18:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.519 19:18:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.519 19:18:59 -- setup/common.sh@31 -- # IFS=': '
00:04:33.519 19:18:59 -- setup/common.sh@31 -- # read -r var val _
00:04:33.519 19:18:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7509952 kB' 'MemAvailable: 9506048 kB' 'Buffers: 2436 kB' 'Cached: 2207924 kB' 'SwapCached: 0 kB' 'Active: 857724 kB' 'Inactive: 1473500 kB' 'Active(anon): 131336 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473500 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1268 kB' 'Writeback: 0 kB' 'AnonPages: 122408 kB' 'Mapped: 48220 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143320 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 77024 kB' 'KernelStack: 6316 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: setup/common.sh@31-32 scans MemTotal through HardwareCorrupted with continue until AnonHugePages matches]
00:04:33.519 19:18:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:33.519 19:18:59 -- setup/common.sh@33 -- # echo 0
00:04:33.519 19:18:59 -- setup/common.sh@33 -- # return 0
00:04:33.519 19:18:59 -- setup/hugepages.sh@97 -- # anon=0
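The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] guard near the top of verify_nr_hugepages only fails when the kernel's selected THP mode is "never"; anonymous hugepages are counted otherwise. An equivalent standalone check (a sketch; the sysfs path is the kernel's standard location, the awk fallback is mine):

# Sketch: only count AnonHugePages when transparent hugepages are not
# globally disabled, mirroring the guard in the trace above.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
if [[ $thp != *"[never]"* ]]; then
    anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
else
    anon_kb=0
fi
echo "anon_hugepages=${anon_kb:-0} kB"       # 0 kB in the trace above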
00:04:33.519 19:18:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:33.519 19:18:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.519 19:18:59 -- setup/common.sh@18 -- # local node=
00:04:33.519 19:18:59 -- setup/common.sh@19 -- # local var val
00:04:33.519 19:18:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.519 19:18:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.519 19:18:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.519 19:18:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.519 19:18:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.519 19:18:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.519 19:18:59 -- setup/common.sh@31 -- # IFS=': '
00:04:33.519 19:18:59 -- setup/common.sh@31 -- # read -r var val _
00:04:33.519 19:18:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7509952 kB' 'MemAvailable: 9506048 kB' 'Buffers: 2436 kB' 'Cached: 2207924 kB' 'SwapCached: 0 kB' 'Active: 857284 kB' 'Inactive: 1473500 kB' 'Active(anon): 130896 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473500 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1268 kB' 'Writeback: 0 kB' 'AnonPages: 122004 kB' 'Mapped: 48136 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143308 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 77012 kB' 'KernelStack: 6360 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: per-key scan until HugePages_Surp matches]
00:04:33.781 19:18:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.781 19:18:59 -- setup/common.sh@33 -- # echo 0
00:04:33.781 19:18:59 -- setup/common.sh@33 -- # return 0
00:04:33.781 19:18:59 -- setup/hugepages.sh@99 -- # surp=0
00:04:33.781 19:18:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:33.781 19:18:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:33.781 19:18:59 -- setup/common.sh@18 -- # local node=
00:04:33.781 19:18:59 -- setup/common.sh@19 -- # local var val
00:04:33.781 19:18:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.781 19:18:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.781 19:18:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.781 19:18:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.781 19:18:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.781 19:18:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.781 19:18:59 -- setup/common.sh@31 -- # IFS=': '
00:04:33.781 19:18:59 -- setup/common.sh@31 -- # read -r var val _
00:04:33.782 19:18:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7509952 kB' 'MemAvailable: 9506048 kB' 'Buffers: 2436 kB' 'Cached: 2207924 kB' 'SwapCached: 0 kB' 'Active: 857256 kB' 'Inactive: 1473500 kB' 'Active(anon): 130868 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473500 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1268 kB' 'Writeback: 0 kB' 'AnonPages: 122232 kB' 'Mapped: 48136 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143308 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 77012 kB' 'KernelStack: 6360 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: setup/common.sh@31-32 per-key scan for HugePages_Rsvd, still in progress at this point in the log]
00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.782 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.782 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 
19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # continue 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.783 19:18:59 -- setup/common.sh@33 -- # echo 0 00:04:33.783 19:18:59 -- setup/common.sh@33 -- # return 0 00:04:33.783 19:18:59 -- setup/hugepages.sh@100 -- # resv=0 00:04:33.783 19:18:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:33.783 nr_hugepages=1025 00:04:33.783 19:18:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:33.783 resv_hugepages=0 00:04:33.783 19:18:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:33.783 surplus_hugepages=0 00:04:33.783 19:18:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:33.783 anon_hugepages=0 00:04:33.783 19:18:59 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:33.783 19:18:59 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:33.783 19:18:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:33.783 19:18:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.783 19:18:59 -- setup/common.sh@18 -- # local node= 00:04:33.783 19:18:59 -- setup/common.sh@19 -- # local var val 00:04:33.783 19:18:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:33.783 19:18:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.783 19:18:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.783 19:18:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.783 19:18:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.783 19:18:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:33.783 19:18:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7509952 kB' 'MemAvailable: 9506048 kB' 'Buffers: 2436 kB' 'Cached: 2207924 kB' 'SwapCached: 0 kB' 'Active: 857496 kB' 'Inactive: 1473500 kB' 'Active(anon): 131108 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473500 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1268 kB' 'Writeback: 0 kB' 'AnonPages: 122216 kB' 'Mapped: 48136 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143308 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 77012 kB' 'KernelStack: 6360 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 
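The per-key comparisons elided above and below all come from the same three statements, setup/common.sh@31-33: split each meminfo line on IFS=': ', skip every non-matching key with continue, and echo the value of the first key that matches. A minimal sketch of that loop, reconstructed from the trace rather than copied from the SPDK source (the name get_meminfo_sketch is ours):

    # scan a meminfo-style file and print the value of one key
    get_meminfo_sketch() {
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # each skipped key is one "continue" in the trace
            echo "$val"                        # unit column ("kB") falls into _
            return 0
        done < "$mem_f"
        return 1
    }
    # usage: get_meminfo_sketch HugePages_Rsvd   -> 0 on this run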
00:04:33.783 19:18:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:33.783 19:18:59 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:33.783 19:18:59 -- setup/common.sh@18 -- # local node=
00:04:33.783 19:18:59 -- setup/common.sh@19 -- # local var val
00:04:33.783 19:18:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.783 19:18:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.783 19:18:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.783 19:18:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.783 19:18:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.783 19:18:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.783 19:18:59 -- setup/common.sh@31 -- # IFS=': '
00:04:33.783 19:18:59 -- setup/common.sh@31 -- # read -r var val _
00:04:33.783 19:18:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7509952 kB' 'MemAvailable: 9506048 kB' 'Buffers: 2436 kB' 'Cached: 2207924 kB' 'SwapCached: 0 kB' 'Active: 857496 kB' 'Inactive: 1473500 kB' 'Active(anon): 131108 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473500 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1268 kB' 'Writeback: 0 kB' 'AnonPages: 122216 kB' 'Mapped: 48136 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143308 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 77012 kB' 'KernelStack: 6360 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[trace elided: per-key scan as before, every key short of HugePages_Total skipped]
00:04:33.785 19:18:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:33.785 19:18:59 -- setup/common.sh@33 -- # echo 1025
00:04:33.785 19:18:59 -- setup/common.sh@33 -- # return 0
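The node-qualified lookup below (get_meminfo HugePages_Surp 0) differs from the calls above only in its input: with a node argument, common.sh@23-24 switches from /proc/meminfo to the per-node sysfs file, whose lines carry a "Node 0 " prefix that the traced expansion mem=("${mem[@]#Node +([0-9]) }") strips. A sketch of that selection, with sed standing in for the extglob expansion:

    node_meminfo() {
        local node=$1 mem_f=/proc/meminfo
        # prefer the per-node view when a node id is given and the file exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        sed 's/^Node [0-9]* *//' "$mem_f"   # no-op for /proc/meminfo
    }
    # usage: node_meminfo 0 | grep '^HugePages_Surp'   -> HugePages_Surp: 0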
00:04:33.785 19:18:59 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:33.785 19:18:59 -- setup/hugepages.sh@112 -- # get_nodes
00:04:33.785 19:18:59 -- setup/hugepages.sh@27 -- # local node
00:04:33.785 19:18:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:33.785 19:18:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:33.785 19:18:59 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:33.785 19:18:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:33.785 19:18:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:33.785 19:18:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:33.785 19:18:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:33.785 19:18:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.785 19:18:59 -- setup/common.sh@18 -- # local node=0
00:04:33.785 19:18:59 -- setup/common.sh@19 -- # local var val
00:04:33.785 19:18:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:33.785 19:18:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.785 19:18:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:33.785 19:18:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:33.785 19:18:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.785 19:18:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.785 19:18:59 -- setup/common.sh@31 -- # IFS=': '
00:04:33.785 19:18:59 -- setup/common.sh@31 -- # read -r var val _
00:04:33.785 19:18:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7509952 kB' 'MemUsed: 4732028 kB' 'SwapCached: 0 kB' 'Active: 857500 kB' 'Inactive: 1473500 kB' 'Active(anon): 131112 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473500 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1268 kB' 'Writeback: 0 kB' 'FilePages: 2210360 kB' 'Mapped: 48136 kB' 'AnonPages: 122216 kB' 'Shmem: 10472 kB' 'KernelStack: 6360 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66296 kB' 'Slab: 143308 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 77012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[trace elided: per-key scan over the node0 key list, MemTotal through HugePages_Free, none matching HugePages_Surp]
00:04:33.786 19:18:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.786 19:18:59 -- setup/common.sh@33 -- # echo 0
00:04:33.786 19:18:59 -- setup/common.sh@33 -- # return 0
00:04:33.786 19:18:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:33.786 19:18:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:33.786 19:18:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:33.786 19:18:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:33.786 19:18:59 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:33.786 node0=1025 expecting 1025
00:04:33.786 19:18:59 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:33.786
00:04:33.786 real 0m0.897s
00:04:33.786 user 0m0.370s
00:04:33.786 sys 0m0.563s
00:04:33.786 19:18:59 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:33.786 19:18:59 -- common/autotest_common.sh@10 -- # set +x
00:04:33.786 ************************************
00:04:33.786 END TEST odd_alloc
00:04:33.786 ************************************
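For the record, the invariant odd_alloc just confirmed: the HugePages_Total the kernel reports must equal the requested count plus the surplus and reserved pages sampled above, i.e. 1025 == 1025 + 0 + 0. Restated with the helper sketched earlier (values in comments are the ones traced):

    expected=1025
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0
    total=$(get_meminfo_sketch HugePages_Total)   # 1025
    (( total == expected + surp + resv )) || echo "hugepage accounting mismatch" >&2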
00:04:33.786 19:18:59 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:33.786 19:18:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:33.786 19:18:59 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:33.786 19:18:59 -- common/autotest_common.sh@10 -- # set +x
00:04:34.045 ************************************
00:04:34.045 START TEST custom_alloc
00:04:34.045 ************************************
00:04:34.045 19:18:59 -- common/autotest_common.sh@1111 -- # custom_alloc
00:04:34.045 19:18:59 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:34.045 19:18:59 -- setup/hugepages.sh@169 -- # local node
00:04:34.045 19:18:59 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:34.045 19:18:59 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:34.045 19:18:59 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:34.045 19:18:59 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:34.045 19:18:59 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:34.045 19:18:59 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:34.045 19:18:59 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:34.045 19:18:59 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:34.045 19:18:59 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:34.045 19:18:59 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:34.045 19:18:59 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:34.045 19:18:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:34.045 19:18:59 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:34.045 19:18:59 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:34.045 19:18:59 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:34.045 19:18:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:34.045 19:18:59 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:34.045 19:18:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:34.045 19:18:59 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:34.045 19:18:59 -- setup/hugepages.sh@83 -- # : 0
00:04:34.045 19:18:59 -- setup/hugepages.sh@84 -- # : 0
00:04:34.045 19:18:59 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:34.045 19:18:59 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:34.046 19:18:59 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:34.046 19:18:59 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:34.046 19:18:59 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:34.046 19:18:59 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:34.046 19:18:59 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:34.046 19:18:59 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:34.046 19:18:59 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:34.046 19:18:59 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:34.046 19:18:59 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:34.046 19:18:59 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:34.046 19:18:59 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:34.046 19:18:59 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:34.046 19:18:59 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:34.046 19:18:59 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:34.046 19:18:59 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:34.046 19:18:59 -- setup/hugepages.sh@78 -- # return 0
00:04:34.046 19:18:59 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:34.046 19:18:59 -- setup/hugepages.sh@187 -- # setup output
00:04:34.046 19:18:59 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.046 19:18:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
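Before the setup.sh output that follows, hugepages.sh@181-187 folded the per-node request into a single HUGENODE string. A sketch of that construction (the wrapper function is ours; the expansion inside mirrors the traced one):

    nodes_hp=([0]=512)                     # one node, 512 default-size pages
    build_hugenode() {
        local IFS=, node
        local -a HUGENODE=()
        for node in "${!nodes_hp[@]}"; do
            HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        done
        echo "${HUGENODE[*]}"              # IFS=, would join multi-node requests with commas
    }
    HUGENODE=$(build_hugenode)             # -> nodes_hp[0]=512, matching the traced assignment
    # the script then runs "setup output" with this HUGENODE in scope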
00:04:34.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:34.569 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:34.569 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:34.569 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:34.569 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:34.569 19:19:00 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:34.569 19:19:00 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:34.569 19:19:00 -- setup/hugepages.sh@89 -- # local node
00:04:34.569 19:19:00 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:34.569 19:19:00 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:34.569 19:19:00 -- setup/hugepages.sh@92 -- # local surp
00:04:34.569 19:19:00 -- setup/hugepages.sh@93 -- # local resv
00:04:34.569 19:19:00 -- setup/hugepages.sh@94 -- # local anon
00:04:34.569 19:19:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
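The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test just traced is the anon gate at hugepages.sh@96: the left-hand string is the content of the kernel's transparent-hugepage switch, and only a selected [never] would skip the AnonHugePages sample that follows. A sketch of the gate (the sysfs path is the standard kernel location, inferred rather than shown in the trace; get_meminfo_sketch is the helper sketched earlier):

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo_sketch AnonHugePages)              # the lookup the trace enters next
    fi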
00:04:34.569 19:19:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:34.569 19:19:00 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:34.569 19:19:00 -- setup/common.sh@18 -- # local node=
00:04:34.569 19:19:00 -- setup/common.sh@19 -- # local var val
00:04:34.569 19:19:00 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.569 19:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.569 19:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.569 19:19:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.569 19:19:00 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.569 19:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.569 19:19:00 -- setup/common.sh@31 -- # IFS=': '
00:04:34.569 19:19:00 -- setup/common.sh@31 -- # read -r var val _
00:04:34.569 19:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8558972 kB' 'MemAvailable: 10555072 kB' 'Buffers: 2436 kB' 'Cached: 2207928 kB' 'SwapCached: 0 kB' 'Active: 858084 kB' 'Inactive: 1473504 kB' 'Active(anon): 131696 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 652 kB' 'Writeback: 0 kB' 'AnonPages: 123048 kB' 'Mapped: 48104 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143188 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76892 kB' 'KernelStack: 6424 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[trace elided: setup/common.sh@31-32 scans the /proc/meminfo keys from MemTotal through HardwareCorrupted, none matching AnonHugePages]
00:04:34.570 19:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:34.570 19:19:00 --
setup/common.sh@33 -- # echo 0 00:04:34.570 19:19:00 -- setup/common.sh@33 -- # return 0 00:04:34.570 19:19:00 -- setup/hugepages.sh@97 -- # anon=0 00:04:34.570 19:19:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.570 19:19:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.570 19:19:00 -- setup/common.sh@18 -- # local node= 00:04:34.570 19:19:00 -- setup/common.sh@19 -- # local var val 00:04:34.570 19:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.570 19:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.570 19:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.570 19:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.570 19:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.570 19:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.570 19:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8558972 kB' 'MemAvailable: 10555072 kB' 'Buffers: 2436 kB' 'Cached: 2207928 kB' 'SwapCached: 0 kB' 'Active: 857412 kB' 'Inactive: 1473504 kB' 'Active(anon): 131024 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 652 kB' 'Writeback: 0 kB' 'AnonPages: 122520 kB' 'Mapped: 48296 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143140 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76844 kB' 'KernelStack: 6404 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 347160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.570 19:19:00 -- 
setup/common.sh@32 -- # continue 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.570 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.570 19:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.571 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.571 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.572 19:19:00 -- setup/common.sh@33 -- # echo 0 00:04:34.572 19:19:00 -- setup/common.sh@33 -- # return 0 00:04:34.572 19:19:00 -- setup/hugepages.sh@99 -- # surp=0 00:04:34.572 19:19:00 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.572 19:19:00 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.572 19:19:00 -- setup/common.sh@18 -- # local node= 00:04:34.572 19:19:00 -- setup/common.sh@19 -- # local var val 00:04:34.572 19:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.572 19:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.572 19:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.572 19:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.572 19:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.572 19:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.572 19:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 
'MemFree: 8558972 kB' 'MemAvailable: 10555072 kB' 'Buffers: 2436 kB' 'Cached: 2207928 kB' 'SwapCached: 0 kB' 'Active: 857324 kB' 'Inactive: 1473504 kB' 'Active(anon): 130936 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 652 kB' 'Writeback: 0 kB' 'AnonPages: 122436 kB' 'Mapped: 48096 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143140 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76844 kB' 'KernelStack: 6372 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.572 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.572 19:19:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # 
continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.573 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.573 19:19:00 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.573 19:19:00 -- setup/common.sh@33 -- # echo 0 00:04:34.573 19:19:00 -- setup/common.sh@33 -- # return 0 00:04:34.573 nr_hugepages=512 00:04:34.573 resv_hugepages=0 00:04:34.573 surplus_hugepages=0 00:04:34.573 anon_hugepages=0 00:04:34.573 19:19:00 -- setup/hugepages.sh@100 -- # resv=0 00:04:34.573 19:19:00 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:34.573 19:19:00 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.573 19:19:00 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.573 19:19:00 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.573 19:19:00 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:34.574 19:19:00 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:34.574 19:19:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.574 19:19:00 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.574 19:19:00 -- setup/common.sh@18 -- # local node= 00:04:34.574 19:19:00 -- setup/common.sh@19 -- # local var val 00:04:34.574 19:19:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.574 19:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.574 19:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.574 19:19:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.574 19:19:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.574 19:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.574 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.574 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.574 19:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8558972 kB' 'MemAvailable: 10555072 kB' 'Buffers: 2436 kB' 'Cached: 2207928 kB' 'SwapCached: 0 kB' 'Active: 857352 kB' 'Inactive: 1473504 kB' 'Active(anon): 130964 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 652 kB' 'Writeback: 0 kB' 'AnonPages: 122180 kB' 'Mapped: 48036 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143140 kB' 'SReclaimable: 66296 kB' 
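The condensed setup/common.sh@17-33 traces above all come from the harness's get_meminfo helper: it loads /proc/meminfo (or a node's sysfs meminfo when a node index is passed), strips any "Node <N> " prefix, then reads the snapshot key by key until the requested field matches and echoes its value. The backslash-riddled patterns in the raw trace (e.g. \A\n\o\n\H\u\g\e\P\a\g\e\s) are just xtrace quoting the right-hand side of the [[ $var == $get ]] match. A minimal sketch of that pattern, reconstructed from the trace alone (the real setup/common.sh may differ in details):

shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node <N> " prefixes

get_meminfo() {
	local get=$1    # meminfo key to look up, e.g. HugePages_Surp
	local node=$2   # optional NUMA node index
	local var val _
	local mem_f mem
	mem_f=/proc/meminfo
	# With a node argument, read that node's counters from sysfs instead.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <N> "; drop that prefix.
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"   # IFS=': ' already split the "kB" unit off into $_
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

Called as get_meminfo HugePages_Surp it prints 0 on this host; get_meminfo HugePages_Surp 0 (the @117 call below) reads node0's file instead.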
00:04:34.574 19:19:00 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:34.574 19:19:00 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:34.574 19:19:00 -- setup/common.sh@18 -- # local node=
00:04:34.574 19:19:00 -- setup/common.sh@19 -- # local var val
00:04:34.574 19:19:00 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.574 19:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.574 19:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:34.574 19:19:00 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:34.574 19:19:00 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.574 19:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.574 19:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8558972 kB' 'MemAvailable: 10555072 kB' 'Buffers: 2436 kB' 'Cached: 2207928 kB' 'SwapCached: 0 kB' 'Active: 857352 kB' 'Inactive: 1473504 kB' 'Active(anon): 130964 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 652 kB' 'Writeback: 0 kB' 'AnonPages: 122180 kB' 'Mapped: 48036 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143140 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76844 kB' 'KernelStack: 6336 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:04:34.574 [xtrace condensed: per-key scan, MemTotal through Unaccepted all continued, until HugePages_Total matched]
00:04:34.575 19:19:00 -- setup/common.sh@33 -- # echo 512
00:04:34.575 19:19:00 -- setup/common.sh@33 -- # return 0
00:04:34.575 19:19:00 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:34.575 19:19:00 -- setup/hugepages.sh@112 -- # get_nodes
00:04:34.575 19:19:00 -- setup/hugepages.sh@27 -- # local node
00:04:34.575 19:19:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:34.575 19:19:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:34.575 19:19:00 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:34.575 19:19:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:34.575 19:19:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:34.575 19:19:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:34.575 19:19:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:34.575 19:19:00 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:34.575 19:19:00 -- setup/common.sh@18 -- # local node=0
00:04:34.575 19:19:00 -- setup/common.sh@19 -- # local var val
00:04:34.575 19:19:00 -- setup/common.sh@20 -- # local mem_f mem
00:04:34.575 19:19:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:34.575 19:19:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:34.575 19:19:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:34.575 19:19:00 -- setup/common.sh@28 -- # mapfile -t mem
00:04:34.575 19:19:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:34.575 19:19:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8558972 kB' 'MemUsed: 3683008 kB' 'SwapCached: 0 kB' 'Active: 857264 kB' 'Inactive: 1473504 kB' 'Active(anon): 130876 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473504 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 652 kB' 'Writeback: 0 kB' 'FilePages: 2210364 kB' 'Mapped: 48036 kB' 'AnonPages: 122288 kB' 'Shmem: 10472 kB' 'KernelStack: 6336 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66296 kB' 'Slab: 143140 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:34.575 [xtrace condensed: per-key scan of the node0 meminfo snapshot for HugePages_Surp in progress]
continue 00:04:34.576 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.576 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.576 19:19:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.576 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.576 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.576 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.576 19:19:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.576 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # continue 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.835 19:19:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.835 19:19:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.835 19:19:00 -- setup/common.sh@33 -- # echo 0 00:04:34.835 19:19:00 -- setup/common.sh@33 -- # return 0 00:04:34.835 19:19:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.835 19:19:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.835 19:19:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.835 19:19:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.835 node0=512 expecting 512 00:04:34.835 19:19:00 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:34.835 19:19:00 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:34.835 00:04:34.835 real 0m0.796s 00:04:34.835 user 0m0.367s 00:04:34.835 sys 0m0.463s 00:04:34.835 19:19:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:34.835 ************************************ 00:04:34.835 END TEST custom_alloc 00:04:34.835 ************************************ 00:04:34.835 19:19:00 -- common/autotest_common.sh@10 -- # set +x 00:04:34.835 19:19:00 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:34.835 19:19:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.835 19:19:00 -- 
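The custom_alloc trace above is dominated by repeated scans of /proc/meminfo and the per-node sysfs meminfo files. Below is a minimal, standalone sketch of the get_meminfo helper as it can be reconstructed from this xtrace; the real helper lives in the repository's setup/common.sh, so treat names and details here as an approximation, not the SPDK source.

    #!/usr/bin/env bash
    # Sketch of get_meminfo, reconstructed from the xtrace above.
    # Reads one field (e.g. HugePages_Total) from /proc/meminfo, or from the
    # per-NUMA-node meminfo file when a node id is given.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem

        mem_f=/proc/meminfo
        # Prefer the per-node snapshot when a node id was passed and it exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <id> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan field by field, exactly the loop the xtrace shows: split each
        # line on ': ' and echo the value once the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Total      # system-wide count
    get_meminfo HugePages_Surp 0     # surplus pages on NUMA node 0

Each long "continue" run condensed above is one pass of that inner loop skipping non-matching keys until the requested field is found.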
00:04:34.835 19:19:00 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:34.835 19:19:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:34.835 19:19:00 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:34.835 19:19:00 -- common/autotest_common.sh@10 -- # set +x
00:04:34.835 ************************************
00:04:34.835 START TEST no_shrink_alloc
00:04:34.835 ************************************
00:04:34.835 19:19:00 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:04:34.835 19:19:00 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:34.835 19:19:00 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:34.835 19:19:00 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:34.835 19:19:00 -- setup/hugepages.sh@51 -- # shift
00:04:34.835 19:19:00 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:34.835 19:19:00 -- setup/hugepages.sh@52 -- # local node_ids
00:04:34.835 19:19:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:34.835 19:19:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:34.835 19:19:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:34.835 19:19:00 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:34.835 19:19:00 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:34.835 19:19:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:34.835 19:19:00 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:34.835 19:19:00 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:34.835 19:19:00 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:34.835 19:19:00 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:34.835 19:19:00 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:34.835 19:19:00 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:34.835 19:19:00 -- setup/hugepages.sh@73 -- # return 0
00:04:34.835 19:19:00 -- setup/hugepages.sh@198 -- # setup output
00:04:34.835 19:19:00 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.835 19:19:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:35.428 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:35.728 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:35.728 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:35.728 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:35.728 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
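The get_test_nr_hugepages 2097152 0 call traced above converts a size in kB into a count of 2048 kB hugepages (2097152 / 2048 = 1024) and records that count per requested NUMA node. A rough standalone sketch of that bookkeeping, simplified from what the trace hints at (the real helper also handles defaults and delegates to get_test_nr_hugepages_per_node):

    #!/usr/bin/env bash
    # Rough sketch of the get_test_nr_hugepages bookkeeping seen in the trace;
    # simplified, not the SPDK source.
    declare -A nodes_test

    get_test_nr_hugepages() {
        local size=$1; shift                  # requested size in kB
        local node_ids=("$@")                 # optional NUMA node ids
        local hugepagesize_kb=2048            # matches 'Hugepagesize: 2048 kB' in this log
        local nr_hugepages=$(( size / hugepagesize_kb ))
        local node

        # Pin the whole allocation on each requested node; default to node 0.
        for node in "${node_ids[@]:-0}"; do
            nodes_test[$node]=$nr_hugepages
        done
    }

    get_test_nr_hugepages 2097152 0           # 2 GiB => nodes_test[0]=1024
    echo "nodes_test[0]=${nodes_test[0]}"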
00:04:35.729 19:19:01 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:35.729 19:19:01 -- setup/hugepages.sh@89 -- # local node
00:04:35.729 19:19:01 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:35.729 19:19:01 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:35.729 19:19:01 -- setup/hugepages.sh@92 -- # local surp
00:04:35.729 19:19:01 -- setup/hugepages.sh@93 -- # local resv
00:04:35.729 19:19:01 -- setup/hugepages.sh@94 -- # local anon
00:04:35.729 19:19:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:35.729 19:19:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:35.729 19:19:01 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:35.729 19:19:01 -- setup/common.sh@18 -- # local node=
00:04:35.729 19:19:01 -- setup/common.sh@19 -- # local var val
00:04:35.729 19:19:01 -- setup/common.sh@20 -- # local mem_f mem
00:04:35.729 19:19:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.729 19:19:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.729 19:19:01 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.729 19:19:01 -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.729 19:19:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.729 19:19:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7513100 kB' 'MemAvailable: 9509208 kB' 'Buffers: 2436 kB' 'Cached: 2207936 kB' 'SwapCached: 0 kB' 'Active: 857756 kB' 'Inactive: 1473512 kB' 'Active(anon): 131368 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473512 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'AnonPages: 122572 kB' 'Mapped: 48192 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143188 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76892 kB' 'KernelStack: 6352 kB' 'PageTables: 4112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[... xtrace condensed: "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" / "continue" repeated for each non-matching field (MemTotal through HardwareCorrupted) ...]
00:04:35.730 19:19:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:35.730 19:19:01 -- setup/common.sh@33 -- # echo 0
00:04:35.730 19:19:01 -- setup/common.sh@33 -- # return 0
00:04:35.730 19:19:01 -- setup/hugepages.sh@97 -- # anon=0
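The hugepages.sh@96 check above, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], gates whether anonymous-THP usage is sampled at all: the bracketed token in the sysfs policy file marks the active mode, and AnonHugePages is only read when that mode is not "never". A standalone sketch of that gate; the awk read is a stand-in for the get_meminfo call:

    #!/usr/bin/env bash
    # Sketch of the anon-THP gate at hugepages.sh@96: only sample AnonHugePages
    # when transparent hugepages are not pinned to "never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon_hugepages=$anon"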
00:04:35.730 19:19:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:35.730 19:19:01 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:35.730 19:19:01 -- setup/common.sh@18 -- # local node=
00:04:35.730 19:19:01 -- setup/common.sh@19 -- # local var val
00:04:35.730 19:19:01 -- setup/common.sh@20 -- # local mem_f mem
00:04:35.730 19:19:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.730 19:19:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.730 19:19:01 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.730 19:19:01 -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.730 19:19:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.730 19:19:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7513100 kB' 'MemAvailable: 9509212 kB' 'Buffers: 2436 kB' 'Cached: 2207940 kB' 'SwapCached: 0 kB' 'Active: 857648 kB' 'Inactive: 1473516 kB' 'Active(anon): 131260 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'AnonPages: 122536 kB' 'Mapped: 48060 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143184 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76888 kB' 'KernelStack: 6352 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[... xtrace condensed: "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" repeated for each non-matching field (MemTotal through HugePages_Rsvd) ...]
00:04:35.731 19:19:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.731 19:19:01 -- setup/common.sh@33 -- # echo 0
00:04:35.731 19:19:01 -- setup/common.sh@33 -- # return 0
00:04:35.732 19:19:01 -- setup/hugepages.sh@99 -- # surp=0
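Each of these reads re-runs the mem=("${mem[@]#Node +([0-9]) }") expansion from setup/common.sh@29. Against /proc/meminfo it is a no-op, but for per-node files it strips the "Node <id> " prefix from every array element in one pass. A tiny demonstration of just that expansion; the sample lines are invented to mimic per-node meminfo output:

    #!/usr/bin/env bash
    # Demo of the extglob array expansion at setup/common.sh@29.
    shopt -s extglob
    mem=('Node 0 MemTotal: 12241980 kB' 'Node 0 HugePages_Surp: 0')
    mem=("${mem[@]#Node +([0-9]) }")     # drop the "Node <id> " prefix everywhere
    printf '%s\n' "${mem[@]}"            # -> MemTotal: 12241980 kB / HugePages_Surp: 0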
00:04:35.732 19:19:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:35.732 19:19:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:35.732 19:19:01 -- setup/common.sh@18 -- # local node=
00:04:35.732 19:19:01 -- setup/common.sh@19 -- # local var val
00:04:35.732 19:19:01 -- setup/common.sh@20 -- # local mem_f mem
00:04:35.732 19:19:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.732 19:19:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.732 19:19:01 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.732 19:19:01 -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.732 19:19:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.732 19:19:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7512848 kB' 'MemAvailable: 9508960 kB' 'Buffers: 2436 kB' 'Cached: 2207940 kB' 'SwapCached: 0 kB' 'Active: 857404 kB' 'Inactive: 1473516 kB' 'Active(anon): 131016 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'AnonPages: 122252 kB' 'Mapped: 48060 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143184 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76888 kB' 'KernelStack: 6320 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54904 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
[... xtrace condensed: "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" repeated for each non-matching field (MemTotal through HugePages_Free) ...]
00:04:35.733 19:19:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:35.733 19:19:01 -- setup/common.sh@33 -- # echo 0
00:04:35.733 19:19:01 -- setup/common.sh@33 -- # return 0
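With anon, surp, and resv collected, the trace below checks the kernel's bookkeeping: HugePages_Total must equal the requested count plus any surplus and reserved pages (the hugepages.sh@107 arithmetic). A standalone restatement of that check, reading the same fields directly:

    #!/usr/bin/env bash
    # Restatement of the verify_nr_hugepages accounting traced below:
    # HugePages_Total == nr_hugepages + HugePages_Surp + HugePages_Rsvd.
    nr_hugepages=1024   # what the test requested

    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

    echo "nr_hugepages=$nr_hugepages surplus_hugepages=$surp resv_hugepages=$resv"
    (( total == nr_hugepages + surp + resv )) || echo "unexpected HugePages_Total: $total"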
00:04:35.733 19:19:01 -- setup/hugepages.sh@100 -- # resv=0 00:04:35.733 19:19:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:35.733 nr_hugepages=1024 00:04:35.733 19:19:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.733 resv_hugepages=0 00:04:35.733 19:19:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.733 surplus_hugepages=0 00:04:35.733 19:19:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.733 anon_hugepages=0 00:04:35.733 19:19:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.733 19:19:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:35.733 19:19:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.733 19:19:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.733 19:19:01 -- setup/common.sh@18 -- # local node= 00:04:35.733 19:19:01 -- setup/common.sh@19 -- # local var val 00:04:35.733 19:19:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.733 19:19:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.733 19:19:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.733 19:19:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.733 19:19:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.733 19:19:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.733 19:19:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7512848 kB' 'MemAvailable: 9508960 kB' 'Buffers: 2436 kB' 'Cached: 2207940 kB' 'SwapCached: 0 kB' 'Active: 857400 kB' 'Inactive: 1473516 kB' 'Active(anon): 131012 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'AnonPages: 122252 kB' 'Mapped: 48060 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 143184 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76888 kB' 'KernelStack: 6320 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54920 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:35.733 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.733 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.733 19:19:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.733 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.733 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.733 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.733 19:19:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.733 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.733 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.733 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.733 19:19:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.733 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.733 
19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.733 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.733 19:19:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.733 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.733 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.733 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.733 19:19:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.733 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.733 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.733 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.733 19:19:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.733 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.733 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # 
[[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 
19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.734 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.734 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.735 19:19:01 -- setup/common.sh@33 -- # echo 1024 00:04:35.735 19:19:01 -- setup/common.sh@33 -- # return 0 00:04:35.735 19:19:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:35.735 19:19:01 -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.735 19:19:01 -- setup/hugepages.sh@27 -- # local node 00:04:35.735 19:19:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.735 19:19:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:35.735 19:19:01 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:35.735 19:19:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.735 19:19:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.735 19:19:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.735 19:19:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.735 19:19:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.735 19:19:01 -- setup/common.sh@18 -- # local node=0 00:04:35.735 19:19:01 -- setup/common.sh@19 -- # local var val 00:04:35.735 19:19:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.735 19:19:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
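get_nodes (hugepages.sh@27-33 above) found a single NUMA node and recorded 1024 expected pages for it before the per-node HugePages_Surp lookup that follows. The enumeration it performs can be reproduced with a short loop over standard Linux sysfs paths; the variable names here are illustrative, not the script's own:

declare -A nodes_sys
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}            # "node0" -> "0"
    # Per-node meminfo lines look like "Node 0 HugePages_Total: 1024".
    nodes_sys[$node]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes"                      # 1 on this VM
echo "node0 HugePages_Total=${nodes_sys[0]}"   # 1024, as verified below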
00:04:35.735 19:19:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.735 19:19:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.735 19:19:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.735 19:19:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.735 19:19:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7512848 kB' 'MemUsed: 4729132 kB' 'SwapCached: 0 kB' 'Active: 857404 kB' 'Inactive: 1473516 kB' 'Active(anon): 131016 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'FilePages: 2210376 kB' 'Mapped: 48060 kB' 'AnonPages: 122252 kB' 'Shmem: 10472 kB' 'KernelStack: 6320 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66296 kB' 'Slab: 143184 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 
19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.735 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.735 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # continue 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.736 19:19:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.736 19:19:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.736 19:19:01 -- setup/common.sh@33 -- # echo 0 00:04:35.736 19:19:01 -- setup/common.sh@33 -- # return 0 00:04:35.736 19:19:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.736 node0=1024 expecting 1024 00:04:35.736 19:19:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.736 19:19:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.736 19:19:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.736 19:19:01 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:35.736 19:19:01 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:35.736 19:19:01 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:35.736 19:19:01 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:35.736 19:19:01 -- setup/hugepages.sh@202 -- # setup output 00:04:35.736 19:19:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.736 19:19:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:36.304 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:36.304 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:36.304 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:36.304 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:36.304 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:36.567 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:36.567 19:19:02 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:36.568 19:19:02 -- setup/hugepages.sh@89 -- # local node 00:04:36.568 19:19:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:36.568 19:19:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:36.568 19:19:02 -- setup/hugepages.sh@92 -- # local surp 00:04:36.568 19:19:02 -- setup/hugepages.sh@93 -- # local resv 00:04:36.568 19:19:02 -- setup/hugepages.sh@94 -- # local anon 00:04:36.568 19:19:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:36.568 19:19:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:36.568 19:19:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:36.568 19:19:02 -- setup/common.sh@18 -- # local node= 00:04:36.568 19:19:02 -- setup/common.sh@19 -- # local var val 00:04:36.568 19:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:36.568 19:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.568 19:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.568 19:19:02 -- setup/common.sh@25 -- # [[ -n '' 
]] 00:04:36.568 19:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.568 19:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7517984 kB' 'MemAvailable: 9514096 kB' 'Buffers: 2436 kB' 'Cached: 2207940 kB' 'SwapCached: 0 kB' 'Active: 858360 kB' 'Inactive: 1473516 kB' 'Active(anon): 131972 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'AnonPages: 123048 kB' 'Mapped: 48320 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 142924 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76628 kB' 'KernelStack: 6408 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55016 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.568 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.568 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 
19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.569 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.569 19:19:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.570 19:19:02 -- setup/common.sh@33 -- # echo 0 00:04:36.570 19:19:02 -- setup/common.sh@33 -- # return 0 00:04:36.570 19:19:02 -- setup/hugepages.sh@97 -- # anon=0 00:04:36.570 19:19:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:36.570 19:19:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.570 19:19:02 -- setup/common.sh@18 -- # local node= 00:04:36.570 19:19:02 -- setup/common.sh@19 -- # local var val 00:04:36.570 19:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:36.570 19:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.570 19:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.570 19:19:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.570 19:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.570 19:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7518048 kB' 'MemAvailable: 9514160 kB' 'Buffers: 2436 kB' 'Cached: 2207940 kB' 'SwapCached: 0 kB' 'Active: 857612 kB' 'Inactive: 1473516 kB' 'Active(anon): 131224 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473516 kB' 'Unevictable: 1536 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48060 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 142924 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76628 kB' 'KernelStack: 6320 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54968 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.570 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.570 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 
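This second verify_nr_hugepages sweep runs after scripts/setup.sh was invoked with NRHUGE=512 and CLEAR_HUGE=no; the earlier INFO line ("Requested 512 hugepages but 1024 already allocated on node0") suggests the setup path only grows the pool when clearing is disabled, which is why all counters still read 1024. A hypothetical standalone illustration of that grow-only step, not the SPDK setup.sh itself:

requested=512
nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
current=$(cat "$nr")
if (( requested > current )); then
    echo "$requested" > "$nr"          # grow the pool to the request (needs root)
else
    # CLEAR_HUGE=no: keep the existing, larger allocation untouched.
    echo "Requested $requested hugepages but $current already allocated on node0"
fi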
00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.571 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.571 19:19:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 19:19:02 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # continue 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 19:19:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.572 19:19:02 -- setup/common.sh@33 -- # echo 0 00:04:36.572 19:19:02 -- setup/common.sh@33 -- # return 0 00:04:36.572 19:19:02 -- setup/hugepages.sh@99 -- # surp=0 00:04:36.572 19:19:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:36.572 19:19:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:36.572 19:19:02 -- setup/common.sh@18 -- # local node= 00:04:36.572 19:19:02 -- setup/common.sh@19 -- # local var val 00:04:36.572 19:19:02 -- setup/common.sh@20 -- # local mem_f mem 00:04:36.572 19:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.572 19:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.572 19:19:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.572 19:19:02 -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.572 19:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # IFS=': ' 00:04:36.572 19:19:02 -- setup/common.sh@31 -- # read -r var val _ 00:04:36.572 19:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7518048 kB' 'MemAvailable: 9514160 kB' 'Buffers: 2436 kB' 'Cached: 2207940 kB' 'SwapCached: 0 kB' 'Active: 857612 kB' 'Inactive: 1473516 kB' 'Active(anon): 131224 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48060 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 142924 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76628 kB' 'KernelStack: 6320 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 344424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54952 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB' 00:04:36.572 
00:04:36.573 [... the same cycle now scans for HugePages_Rsvd: every key from MemTotal through HugePages_Free is read and skipped ...]
00:04:36.575 19:19:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.575 19:19:02 -- setup/common.sh@33 -- # echo 0
00:04:36.575 19:19:02 -- setup/common.sh@33 -- # return 0
00:04:36.575 19:19:02 -- setup/hugepages.sh@100 -- # resv=0
00:04:36.575 19:19:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:36.575 nr_hugepages=1024
00:04:36.575 19:19:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:36.575 resv_hugepages=0
00:04:36.575 19:19:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:36.575 surplus_hugepages=0
00:04:36.575 19:19:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:36.575 anon_hugepages=0
00:04:36.575 19:19:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:36.575 19:19:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
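For readers following the trace: every one of these key-by-key scans is the same small parser in test/setup/common.sh, which plucks a single field out of /proc/meminfo (or a node's meminfo file) by name. A minimal sketch reconstructed from the @17-@33 trace lines, with the Node-prefix stripping folded into a sed call (a simplification, not the literal source):

    # Sketch of get_meminfo as suggested by the trace above (simplified).
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # A node argument switches to that node's own meminfo file.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix each line with "Node <n> "; strip it, mirroring
        # the mem=("${mem[@]#Node +([0-9]) }") step seen in the trace.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

Under that reading, get_meminfo HugePages_Rsvd prints the 0 echoed above, and get_meminfo HugePages_Surp 0 (used further down) reads /sys/devices/system/node/node0/meminfo.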
00:04:36.575 19:19:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:36.575 19:19:02 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:36.575 19:19:02 -- setup/common.sh@18 -- # local node=
00:04:36.575 19:19:02 -- setup/common.sh@19 -- # local var val
00:04:36.575 19:19:02 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.575 19:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.575 19:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.575 19:19:02 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.575 19:19:02 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.575 19:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.575 19:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7518696 kB' 'MemAvailable: 9514808 kB' 'Buffers: 2436 kB' 'Cached: 2207940 kB' 'SwapCached: 0 kB' 'Active: 857948 kB' 'Inactive: 1473516 kB' 'Active(anon): 131560 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'AnonPages: 122680 kB' 'Mapped: 48060 kB' 'Shmem: 10472 kB' 'KReclaimable: 66296 kB' 'Slab: 142924 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76628 kB' 'KernelStack: 6336 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 347024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54936 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 163692 kB' 'DirectMap2M: 5079040 kB' 'DirectMap1G: 9437184 kB'
00:04:36.575 19:19:02 -- setup/common.sh@31 -- # IFS=': '
00:04:36.575 19:19:02 -- setup/common.sh@31 -- # read -r var val _
00:04:36.576 [... the scan for HugePages_Total walks every key from MemTotal through Unaccepted without a match ...]
00:04:36.577 19:19:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:36.577 19:19:02 -- setup/common.sh@33 -- # echo 1024
00:04:36.577 19:19:02 -- setup/common.sh@33 -- # return 0
00:04:36.577 19:19:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:36.577 19:19:02 -- setup/hugepages.sh@112 -- # get_nodes
00:04:36.577 19:19:02 -- setup/hugepages.sh@27 -- # local node
00:04:36.577 19:19:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:36.577 19:19:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:36.577 19:19:02 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:36.577 19:19:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
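get_nodes, traced at hugepages.sh@27-@33 just above, simply enumerates /sys/devices/system/node/node<N> and records a hugepage count per node (1024 for node 0 here). A sketch under that reading -- the sysfs counter it reads from is an assumption, since the trace only shows the resulting value:

    # Hypothetical reconstruction of the node bookkeeping traced above.
    shopt -s extglob            # enables the node+([0-9]) glob
    declare -A nodes_sys nodes_test

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # Assumed source of the per-node count; the trace shows only 1024.
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))      # the test needs at least one node
    }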
00:04:36.577 19:19:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:36.577 19:19:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:36.577 19:19:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:36.577 19:19:02 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.577 19:19:02 -- setup/common.sh@18 -- # local node=0
00:04:36.577 19:19:02 -- setup/common.sh@19 -- # local var val
00:04:36.577 19:19:02 -- setup/common.sh@20 -- # local mem_f mem
00:04:36.577 19:19:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.577 19:19:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:36.577 19:19:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:36.577 19:19:02 -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.577 19:19:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.577 19:19:02 -- setup/common.sh@31 -- # IFS=': '
00:04:36.577 19:19:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7518696 kB' 'MemUsed: 4723284 kB' 'SwapCached: 0 kB' 'Active: 857812 kB' 'Inactive: 1473516 kB' 'Active(anon): 131424 kB' 'Inactive(anon): 0 kB' 'Active(file): 726388 kB' 'Inactive(file): 1473516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 816 kB' 'Writeback: 0 kB' 'FilePages: 2210376 kB' 'Mapped: 48120 kB' 'AnonPages: 122560 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66296 kB' 'Slab: 142924 kB' 'SReclaimable: 66296 kB' 'SUnreclaim: 76628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:36.577 19:19:02 -- setup/common.sh@31 -- # read -r var val _
00:04:36.578 [... the per-node scan for HugePages_Surp walks every node0 meminfo key from MemTotal through FilePmdMapped without a match ...]
00:04:36.578 [... likewise Unaccepted, HugePages_Total and HugePages_Free are skipped ...]
00:04:36.578 19:19:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.578 19:19:02 -- setup/common.sh@33 -- # echo 0
00:04:36.578 19:19:02 -- setup/common.sh@33 -- # return 0
00:04:36.578 19:19:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:36.578 19:19:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:36.578 19:19:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:36.578 19:19:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:36.578 node0=1024 expecting 1024
00:04:36.578 19:19:02 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:36.578 19:19:02 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:36.578 
00:04:36.578 real 0m1.820s
00:04:36.578 user 0m0.814s
00:04:36.578 sys 0m1.088s
00:04:36.578 19:19:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:36.578 19:19:02 -- common/autotest_common.sh@10 -- # set +x
00:04:36.578 ************************************
00:04:36.578 END TEST no_shrink_alloc
00:04:36.578 ************************************
00:04:36.837 19:19:02 -- setup/hugepages.sh@217 -- # clear_hp
00:04:36.837 19:19:02 -- setup/hugepages.sh@37 -- # local node hp
00:04:36.837 19:19:02 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:36.837 19:19:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:36.837 19:19:02 -- setup/hugepages.sh@41 -- # echo 0
00:04:36.837 19:19:02 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:36.837 19:19:02 -- setup/hugepages.sh@41 -- # echo 0
00:04:36.837 19:19:02 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:36.837 19:19:02 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:36.838 ************************************
00:04:36.838 END TEST hugepages
00:04:36.838 ************************************
00:04:36.838 
00:04:36.838 real 0m7.856s
00:04:36.838 user 0m3.336s
00:04:36.838 sys 0m4.676s
00:04:36.838 19:19:02 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
00:04:36.838 19:19:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:36.838 19:19:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:36.838 19:19:02 -- common/autotest_common.sh@10 -- # set +x
00:04:36.838 ************************************
00:04:36.838 START TEST driver
00:04:36.838 ************************************
00:04:36.838 19:19:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh
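The START TEST/END TEST banners and the real/user/sys blocks that frame each suite come from autotest_common.sh's run_test wrapper (the '[' 2 -le 1 ']' line above is its argument check). A sketch of its shape -- the banner text is taken from this log, the body is assumed:

    # Hedged sketch of run_test: banner, timed body, closing banner.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"               # emits the real/user/sys block seen in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

Invoked here as run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh.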
00:04:37.097 * Looking for test storage...
00:04:37.097 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:37.097 19:19:02 -- setup/driver.sh@68 -- # setup reset
00:04:37.097 19:19:02 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:37.097 19:19:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:43.668 19:19:08 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:43.668 19:19:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:43.668 19:19:08 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:43.668 19:19:08 -- common/autotest_common.sh@10 -- # set +x
00:04:43.668 ************************************
00:04:43.668 START TEST guess_driver
00:04:43.668 ************************************
00:04:43.668 19:19:08 -- common/autotest_common.sh@1111 -- # guess_driver
00:04:43.668 19:19:08 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:43.668 19:19:08 -- setup/driver.sh@47 -- # local fail=0
00:04:43.668 19:19:08 -- setup/driver.sh@49 -- # pick_driver
00:04:43.668 19:19:08 -- setup/driver.sh@36 -- # vfio
00:04:43.668 19:19:08 -- setup/driver.sh@21 -- # local iommu_grups
00:04:43.668 19:19:08 -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:43.668 19:19:08 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:43.668 19:19:08 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:43.668 19:19:08 -- setup/driver.sh@29 -- # (( 0 > 0 ))
00:04:43.668 19:19:08 -- setup/driver.sh@29 -- # [[ '' == Y ]]
00:04:43.668 19:19:08 -- setup/driver.sh@32 -- # return 1
00:04:43.668 19:19:08 -- setup/driver.sh@38 -- # uio
00:04:43.668 19:19:08 -- setup/driver.sh@17 -- # is_driver uio_pci_generic
00:04:43.668 19:19:08 -- setup/driver.sh@14 -- # mod uio_pci_generic
00:04:43.668 19:19:08 -- setup/driver.sh@12 -- # dep uio_pci_generic
00:04:43.668 19:19:08 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic
00:04:43.668 19:19:08 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]]
00:04:43.668 19:19:08 -- setup/driver.sh@39 -- # echo uio_pci_generic
00:04:43.668 19:19:08 -- setup/driver.sh@49 -- # driver=uio_pci_generic
00:04:43.668 19:19:08 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:43.668 Looking for driver=uio_pci_generic
00:04:43.668 19:19:08 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic'
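The driver selection just traced reduces to: prefer VFIO when IOMMU groups exist (or unsafe no-IOMMU mode is enabled), otherwise fall back to uio_pci_generic if modprobe can resolve it to real kernel modules. A sketch of that decision; the vfio-pci name on the success branch is an assumption, the rest follows the @21-@39 trace lines:

    # Sketch of pick_driver as suggested by the trace (glue code assumed).
    shopt -s nullglob           # an empty iommu_groups dir -> zero-length array

    pick_driver() {
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        local unsafe_vfio=
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci          # assumed name for the VFIO branch
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic   # the branch taken in this run
        else
            echo 'No valid driver found'
        fi
    }

In this run (( 0 > 0 )) and [[ '' == Y ]] both fail, so the uio branch wins and driver=uio_pci_generic.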
00:04:43.668 19:19:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:43.668 19:19:08 -- setup/driver.sh@45 -- # setup output config
00:04:43.668 19:19:08 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.668 19:19:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:43.668 19:19:09 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]]
00:04:43.668 19:19:09 -- setup/driver.sh@58 -- # continue
00:04:43.668 19:19:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:44.607 19:19:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:44.607 19:19:09 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:44.607 19:19:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:44.607 19:19:09 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:44.607 19:19:09 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:44.607 19:19:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:44.607 19:19:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:44.607 19:19:10 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]]
00:04:44.607 19:19:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:44.607 19:19:10 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:44.607 19:19:10 -- setup/driver.sh@65 -- # setup reset
00:04:44.607 19:19:10 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:44.607 19:19:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:51.175 
00:04:51.175 real 0m7.533s
00:04:51.175 user 0m0.866s
00:04:51.175 sys 0m1.829s
00:04:51.175 19:19:16 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:51.175 19:19:16 -- common/autotest_common.sh@10 -- # set +x
00:04:51.175 ************************************
00:04:51.175 END TEST guess_driver
00:04:51.175 ************************************
00:04:51.175 ************************************
00:04:51.175 END TEST driver
00:04:51.175 ************************************
00:04:51.175 
00:04:51.175 real 0m13.825s
00:04:51.175 user 0m1.312s
00:04:51.175 sys 0m2.851s
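The read/'->' loop at driver.sh@57-@61 above verifies the choice against setup.sh config, whose device lines end in '-> <bound driver>'. A sketch of that loop; the exact column layout of the config output is an assumption beyond the fifth-field marker visible in the trace:

    # Hedged sketch of the verification loop over `setup.sh config` output.
    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker != '->' ]] && continue         # skips headers like "devices:"
        [[ $setup_driver == "$driver" ]] || fail=1
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh config)
    (( fail == 0 ))                               # all devices matched in this run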
00:04:51.175 19:19:16 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:51.175 19:19:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:51.175 19:19:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:51.175 19:19:16 -- common/autotest_common.sh@10 -- # set +x
00:04:51.175 ************************************
00:04:51.175 START TEST devices
00:04:51.175 ************************************
00:04:51.175 19:19:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh
00:04:51.175 * Looking for test storage...
00:04:51.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:51.175 19:19:16 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:51.175 19:19:16 -- setup/devices.sh@192 -- # setup reset
00:04:51.175 19:19:16 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:51.175 19:19:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:52.111 19:19:17 -- setup/devices.sh@194 -- # get_zoned_devs
00:04:52.111 19:19:17 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:04:52.111 19:19:17 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:04:52.111 19:19:17 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:04:52.111 19:19:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:04:52.111 19:19:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:04:52.111 19:19:17 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:04:52.111 19:19:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:52.111 19:19:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:04:52.111 [... the same is_block_zoned check repeats for nvme1n1, nvme2n1, nvme2n2, nvme2n3, nvme3c3n1 and nvme3n1; every namespace reports 'none', so zoned_devs stays empty ...]
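Device selection in devices.sh applies two filters: get_zoned_devs (just traced) excludes zoned namespaces, whose queue/zoned sysfs attribute reads something other than none, and the per-device gate traced below keeps a namespace only if nothing claims it (no SPDK GPT, no partition table) and it clears min_disk_size. A sketch of both checks; the caller's exact condition and the 512-byte-sector size arithmetic are assumptions:

    # Hedged sketch of the two device filters in test/setup/devices.sh.
    is_block_zoned() {
        local device=$1
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }

    block_in_use() {
        local block=$1 pt
        # spdk-gpt.py prints "No valid GPT data, bailing" and fails on blank disks.
        if ! /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py "$block"; then
            pt=$(blkid -s PTTYPE -o value "/dev/$block")
            [[ -z $pt ]] && return 1    # no partition table either: free to use
        fi
        return 0                        # something already claims the disk
    }

    sec_size_to_bytes() {
        local dev=$1
        [[ -e /sys/block/$dev ]] || return 1
        echo $(( $(< "/sys/block/$dev/size") * 512 ))   # sectors -> bytes (assumed)
    }

    min_disk_size=3221225472            # 3 GiB, from devices.sh@198
    if ! is_block_zoned nvme0n1 && ! block_in_use nvme0n1 &&
        (( $(sec_size_to_bytes nvme0n1) >= min_disk_size )); then
        blocks+=(nvme0n1)               # nvme0n1 passes: 5368709120 bytes here
    fi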
device=nvme3n1 00:04:52.111 19:19:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:52.111 19:19:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:52.111 19:19:17 -- setup/devices.sh@196 -- # blocks=() 00:04:52.111 19:19:17 -- setup/devices.sh@196 -- # declare -a blocks 00:04:52.111 19:19:17 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:52.111 19:19:17 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:52.111 19:19:17 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:52.111 19:19:17 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.111 19:19:17 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:52.111 19:19:17 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:52.111 19:19:17 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:52.111 19:19:17 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:52.111 19:19:17 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:52.111 19:19:17 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:52.111 19:19:17 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:52.371 No valid GPT data, bailing 00:04:52.371 19:19:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:52.371 19:19:17 -- scripts/common.sh@391 -- # pt= 00:04:52.371 19:19:17 -- scripts/common.sh@392 -- # return 1 00:04:52.371 19:19:17 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:52.371 19:19:17 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:52.371 19:19:17 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:52.371 19:19:17 -- setup/common.sh@80 -- # echo 5368709120 00:04:52.371 19:19:17 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:52.371 19:19:17 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.371 19:19:17 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:52.371 19:19:17 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.371 19:19:17 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:52.371 19:19:17 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:52.371 19:19:17 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:52.371 19:19:17 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:52.371 19:19:17 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:52.371 19:19:17 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:52.371 19:19:17 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:52.371 No valid GPT data, bailing 00:04:52.371 19:19:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:52.371 19:19:17 -- scripts/common.sh@391 -- # pt= 00:04:52.371 19:19:17 -- scripts/common.sh@392 -- # return 1 00:04:52.371 19:19:17 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:52.371 19:19:17 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:52.371 19:19:17 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:52.371 19:19:17 -- setup/common.sh@80 -- # echo 6343335936 00:04:52.371 19:19:17 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:52.371 19:19:17 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.371 19:19:17 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:52.371 19:19:17 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.371 19:19:17 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:52.371 19:19:17 -- 
setup/devices.sh@201 -- # ctrl=nvme2 00:04:52.371 19:19:17 -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:52.371 19:19:17 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:52.371 19:19:17 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:52.371 19:19:17 -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:04:52.371 19:19:17 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:52.371 No valid GPT data, bailing 00:04:52.371 19:19:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:52.371 19:19:18 -- scripts/common.sh@391 -- # pt= 00:04:52.371 19:19:18 -- scripts/common.sh@392 -- # return 1 00:04:52.371 19:19:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:52.371 19:19:18 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:52.371 19:19:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:52.371 19:19:18 -- setup/common.sh@80 -- # echo 4294967296 00:04:52.371 19:19:18 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:52.371 19:19:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.371 19:19:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:52.371 19:19:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.371 19:19:18 -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:04:52.371 19:19:18 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:52.371 19:19:18 -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:52.371 19:19:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:52.371 19:19:18 -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:04:52.371 19:19:18 -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:04:52.371 19:19:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:04:52.630 No valid GPT data, bailing 00:04:52.630 19:19:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:52.630 19:19:18 -- scripts/common.sh@391 -- # pt= 00:04:52.630 19:19:18 -- scripts/common.sh@392 -- # return 1 00:04:52.630 19:19:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:04:52.630 19:19:18 -- setup/common.sh@76 -- # local dev=nvme2n2 00:04:52.630 19:19:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:04:52.630 19:19:18 -- setup/common.sh@80 -- # echo 4294967296 00:04:52.630 19:19:18 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:52.630 19:19:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.630 19:19:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:52.631 19:19:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.631 19:19:18 -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:04:52.631 19:19:18 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:52.631 19:19:18 -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:52.631 19:19:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:52.631 19:19:18 -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:04:52.631 19:19:18 -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:04:52.631 19:19:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:04:52.631 No valid GPT data, bailing 00:04:52.631 19:19:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:52.631 19:19:18 -- scripts/common.sh@391 -- # pt= 00:04:52.631 19:19:18 -- scripts/common.sh@392 -- # return 1 00:04:52.631 19:19:18 -- 
setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:04:52.631 19:19:18 -- setup/common.sh@76 -- # local dev=nvme2n3 00:04:52.631 19:19:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:04:52.631 19:19:18 -- setup/common.sh@80 -- # echo 4294967296 00:04:52.631 19:19:18 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:52.631 19:19:18 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:52.631 19:19:18 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:52.631 19:19:18 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:52.631 19:19:18 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:52.631 19:19:18 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:52.631 19:19:18 -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:04:52.631 19:19:18 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:52.631 19:19:18 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:52.631 19:19:18 -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:04:52.631 19:19:18 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:52.631 No valid GPT data, bailing 00:04:52.631 19:19:18 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:52.631 19:19:18 -- scripts/common.sh@391 -- # pt= 00:04:52.631 19:19:18 -- scripts/common.sh@392 -- # return 1 00:04:52.631 19:19:18 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:52.631 19:19:18 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:52.631 19:19:18 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:52.631 19:19:18 -- setup/common.sh@80 -- # echo 1073741824 00:04:52.631 19:19:18 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:52.631 19:19:18 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:52.631 19:19:18 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:52.631 19:19:18 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:52.631 19:19:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.631 19:19:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.631 19:19:18 -- common/autotest_common.sh@10 -- # set +x 00:04:52.890 ************************************ 00:04:52.890 START TEST nvme_mount 00:04:52.890 ************************************ 00:04:52.890 19:19:18 -- common/autotest_common.sh@1111 -- # nvme_mount 00:04:52.890 19:19:18 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:52.890 19:19:18 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:52.890 19:19:18 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:52.890 19:19:18 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:52.890 19:19:18 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:52.890 19:19:18 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:52.890 19:19:18 -- setup/common.sh@40 -- # local part_no=1 00:04:52.890 19:19:18 -- setup/common.sh@41 -- # local size=1073741824 00:04:52.890 19:19:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:52.890 19:19:18 -- setup/common.sh@44 -- # parts=() 00:04:52.890 19:19:18 -- setup/common.sh@44 -- # local parts 00:04:52.890 19:19:18 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:52.890 19:19:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.890 19:19:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.890 19:19:18 -- setup/common.sh@46 -- # (( part++ )) 
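The namespace selection traced above reduces to two gates per /sys/block/nvme* node: no recognizable partition table, and a capacity of at least min_disk_size (3221225472 bytes, i.e. 3 GiB). A minimal standalone sketch of that gate, assuming stock util-linux blkid in place of the spdk-gpt.py helper and 512-byte logical sectors for the sysfs sector count (run as root so blkid can read the raw devices):

  #!/usr/bin/env bash
  # Gate namespaces the way the devices.sh trace above does: skip a disk if
  # blkid reports any partition-table type, keep it if it is at least 3 GiB.
  shopt -s extglob                            # for the !(*c*) pattern below
  min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the log
  blocks=()
  for block in /sys/block/nvme!(*c*); do      # same glob as the trace
      dev=${block##*/}
      # A non-empty PTTYPE means the disk is already partitioned / in use.
      [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue
      size=$(( $(< "$block/size") * 512 ))    # sysfs counts 512-byte sectors
      (( size >= min_disk_size )) && blocks+=("$dev")
  done
  printf 'candidate test disks: %s\n' "${blocks[*]}"

This reproduces why nvme3n1 (1073741824 bytes) is echoed but never becomes the test disk: it clears the GPT check yet fails the size comparison.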
00:04:52.890 19:19:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.890 19:19:18 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:52.890 19:19:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:52.890 19:19:18 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:53.827 Creating new GPT entries in memory. 00:04:53.827 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:53.827 other utilities. 00:04:53.827 19:19:19 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:53.827 19:19:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.827 19:19:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:53.827 19:19:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.827 19:19:19 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:54.766 Creating new GPT entries in memory. 00:04:54.766 The operation has completed successfully. 00:04:54.766 19:19:20 -- setup/common.sh@57 -- # (( part++ )) 00:04:54.766 19:19:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.766 19:19:20 -- setup/common.sh@62 -- # wait 59050 00:04:54.766 19:19:20 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:54.767 19:19:20 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:54.767 19:19:20 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.026 19:19:20 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:55.026 19:19:20 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:55.026 19:19:20 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.026 19:19:20 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.026 19:19:20 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:55.026 19:19:20 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:55.026 19:19:20 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:55.026 19:19:20 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:55.026 19:19:20 -- setup/devices.sh@53 -- # local found=0 00:04:55.026 19:19:20 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:55.026 19:19:20 -- setup/devices.sh@56 -- # : 00:04:55.026 19:19:20 -- setup/devices.sh@59 -- # local pci status 00:04:55.026 19:19:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.026 19:19:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:55.026 19:19:20 -- setup/devices.sh@47 -- # setup output config 00:04:55.026 19:19:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.026 19:19:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:55.285 19:19:20 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.285 19:19:20 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:55.285 19:19:20 -- setup/devices.sh@63 -- # found=1 00:04:55.285 
19:19:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.285 19:19:20 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.285 19:19:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.285 19:19:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.285 19:19:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.565 19:19:21 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.565 19:19:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.565 19:19:21 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.565 19:19:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.840 19:19:21 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:55.840 19:19:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.099 19:19:21 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.099 19:19:21 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:56.099 19:19:21 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.099 19:19:21 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.099 19:19:21 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:56.099 19:19:21 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:56.099 19:19:21 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.099 19:19:21 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.099 19:19:21 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:56.099 19:19:21 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:56.099 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:56.099 19:19:21 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:56.099 19:19:21 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:56.358 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:56.358 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:56.358 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:56.358 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:56.358 19:19:21 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:56.358 19:19:21 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:56.358 19:19:21 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.358 19:19:21 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:56.358 19:19:21 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:56.358 19:19:21 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.358 19:19:21 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:56.358 19:19:21 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:56.358 19:19:21 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:56.358 19:19:21 -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:56.358 19:19:21 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:56.358 19:19:21 -- setup/devices.sh@53 -- # local found=0 00:04:56.358 19:19:21 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:56.358 19:19:21 -- setup/devices.sh@56 -- # : 00:04:56.358 19:19:21 -- setup/devices.sh@59 -- # local pci status 00:04:56.358 19:19:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.358 19:19:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:56.358 19:19:21 -- setup/devices.sh@47 -- # setup output config 00:04:56.358 19:19:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.358 19:19:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:56.617 19:19:22 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.617 19:19:22 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:56.617 19:19:22 -- setup/devices.sh@63 -- # found=1 00:04:56.617 19:19:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.617 19:19:22 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.617 19:19:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.878 19:19:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.878 19:19:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.878 19:19:22 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.878 19:19:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.878 19:19:22 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:56.878 19:19:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.446 19:19:22 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:57.446 19:19:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.446 19:19:23 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:57.446 19:19:23 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:57.446 19:19:23 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:57.446 19:19:23 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:57.446 19:19:23 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:57.446 19:19:23 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:57.446 19:19:23 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:57.446 19:19:23 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:57.446 19:19:23 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:57.446 19:19:23 -- setup/devices.sh@50 -- # local mount_point= 00:04:57.446 19:19:23 -- setup/devices.sh@51 -- # local test_file= 00:04:57.446 19:19:23 -- setup/devices.sh@53 -- # local found=0 00:04:57.446 19:19:23 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:57.446 19:19:23 -- setup/devices.sh@59 -- # local pci status 00:04:57.446 19:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.446 19:19:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:57.446 19:19:23 -- setup/devices.sh@47 -- # 
setup output config 00:04:57.446 19:19:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.446 19:19:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:58.015 19:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:58.015 19:19:23 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:58.015 19:19:23 -- setup/devices.sh@63 -- # found=1 00:04:58.015 19:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.015 19:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:58.015 19:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.015 19:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:58.015 19:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.273 19:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:58.273 19:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.274 19:19:23 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:58.274 19:19:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.532 19:19:24 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:58.532 19:19:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.791 19:19:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.791 19:19:24 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:58.791 19:19:24 -- setup/devices.sh@68 -- # return 0 00:04:58.791 19:19:24 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:58.791 19:19:24 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:58.792 19:19:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.792 19:19:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.792 19:19:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.792 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:58.792 00:04:58.792 real 0m6.138s 00:04:58.792 user 0m1.574s 00:04:58.792 sys 0m2.176s 00:04:58.792 19:19:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.792 19:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:58.792 ************************************ 00:04:58.792 END TEST nvme_mount 00:04:58.792 ************************************ 00:04:59.050 19:19:24 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:59.050 19:19:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.050 19:19:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.050 19:19:24 -- common/autotest_common.sh@10 -- # set +x 00:04:59.050 ************************************ 00:04:59.051 START TEST dm_mount 00:04:59.051 ************************************ 00:04:59.051 19:19:24 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:59.051 19:19:24 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:59.051 19:19:24 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:59.051 19:19:24 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:59.051 19:19:24 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:59.051 19:19:24 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:59.051 19:19:24 -- setup/common.sh@40 -- # local part_no=2 00:04:59.051 19:19:24 -- setup/common.sh@41 -- # local size=1073741824 00:04:59.051 19:19:24 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:59.051 
19:19:24 -- setup/common.sh@44 -- # parts=() 00:04:59.051 19:19:24 -- setup/common.sh@44 -- # local parts 00:04:59.051 19:19:24 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:59.051 19:19:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.051 19:19:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:59.051 19:19:24 -- setup/common.sh@46 -- # (( part++ )) 00:04:59.051 19:19:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.051 19:19:24 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:59.051 19:19:24 -- setup/common.sh@46 -- # (( part++ )) 00:04:59.051 19:19:24 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:59.051 19:19:24 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:59.051 19:19:24 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:59.051 19:19:24 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:00.430 Creating new GPT entries in memory. 00:05:00.430 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:00.430 other utilities. 00:05:00.430 19:19:25 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:00.430 19:19:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:00.430 19:19:25 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:00.430 19:19:25 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:00.430 19:19:25 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:01.368 Creating new GPT entries in memory. 00:05:01.368 The operation has completed successfully. 00:05:01.368 19:19:26 -- setup/common.sh@57 -- # (( part++ )) 00:05:01.368 19:19:26 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:01.368 19:19:26 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:01.368 19:19:26 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:01.368 19:19:26 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:02.306 The operation has completed successfully. 
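The sgdisk records above all come from the same start/end arithmetic: size is first divided down with (( size /= 4096 )), giving a 262144-sector span, so partition 1 lands at 2048..264191 and partition 2 at 264192..526335. A condensed sketch of that loop (disk name and sizes taken from the log; run as root on a scratch disk only):

  #!/usr/bin/env bash
  # Reproduce the partition layout traced above: zap the label, then lay out
  # part_no equal-sized partitions back to back starting at sector 2048.
  disk=nvme0n1 part_no=2
  size=1073741824
  (( size /= 4096 ))                      # 262144, the per-partition span
  sgdisk "/dev/$disk" --zap-all
  part_start=0 part_end=0
  for (( part = 1; part <= part_no; part++ )); do
      (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
      (( part_end = part_start + size - 1 ))
      # The trace takes an flock on the whole-disk node around each sgdisk
      # call so nothing else reopens the device mid-write.
      flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
  done

With part_no=1 this yields exactly the nvme_mount layout (--new=1:2048:264191); with part_no=2, the dm_mount one shown just above.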
00:05:02.306 19:19:27 -- setup/common.sh@57 -- # (( part++ )) 00:05:02.306 19:19:27 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.306 19:19:27 -- setup/common.sh@62 -- # wait 59691 00:05:02.306 19:19:27 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:02.306 19:19:27 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.306 19:19:27 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:02.306 19:19:27 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:02.609 19:19:28 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:02.609 19:19:28 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:02.609 19:19:28 -- setup/devices.sh@161 -- # break 00:05:02.609 19:19:28 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:02.609 19:19:28 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:02.609 19:19:28 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:02.609 19:19:28 -- setup/devices.sh@166 -- # dm=dm-0 00:05:02.609 19:19:28 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:02.609 19:19:28 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:02.609 19:19:28 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.609 19:19:28 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:02.609 19:19:28 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.609 19:19:28 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:02.609 19:19:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:02.609 19:19:28 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.609 19:19:28 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:02.609 19:19:28 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:02.609 19:19:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:02.609 19:19:28 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:02.609 19:19:28 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:02.609 19:19:28 -- setup/devices.sh@53 -- # local found=0 00:05:02.609 19:19:28 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:02.609 19:19:28 -- setup/devices.sh@56 -- # : 00:05:02.609 19:19:28 -- setup/devices.sh@59 -- # local pci status 00:05:02.609 19:19:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.609 19:19:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:02.609 19:19:28 -- setup/devices.sh@47 -- # setup output config 00:05:02.609 19:19:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.609 19:19:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:02.887 19:19:28 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:02.887 19:19:28 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:02.887 19:19:28 -- setup/devices.sh@63 -- # found=1 00:05:02.887 19:19:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.887 19:19:28 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:02.887 19:19:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:02.887 19:19:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:02.887 19:19:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.147 19:19:28 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:03.147 19:19:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.147 19:19:28 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:03.147 19:19:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.406 19:19:28 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:03.406 19:19:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.666 19:19:29 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:03.666 19:19:29 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:03.666 19:19:29 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:03.666 19:19:29 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:03.666 19:19:29 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:03.666 19:19:29 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:03.666 19:19:29 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:03.666 19:19:29 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:03.666 19:19:29 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:03.666 19:19:29 -- setup/devices.sh@50 -- # local mount_point= 00:05:03.666 19:19:29 -- setup/devices.sh@51 -- # local test_file= 00:05:03.666 19:19:29 -- setup/devices.sh@53 -- # local found=0 00:05:03.666 19:19:29 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:03.666 19:19:29 -- setup/devices.sh@59 -- # local pci status 00:05:03.666 19:19:29 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:03.666 19:19:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.666 19:19:29 -- setup/devices.sh@47 -- # setup output config 00:05:03.666 19:19:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.666 19:19:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:03.925 19:19:29 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:03.925 19:19:29 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:03.925 19:19:29 -- setup/devices.sh@63 -- # found=1 00:05:03.925 19:19:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.925 19:19:29 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:03.925 19:19:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.184 19:19:29 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:04.184 19:19:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.184 19:19:29 -- setup/devices.sh@62 -- # [[ 
0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:04.184 19:19:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.184 19:19:29 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:04.184 19:19:29 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.442 19:19:30 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:04.442 19:19:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.699 19:19:30 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.699 19:19:30 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:04.699 19:19:30 -- setup/devices.sh@68 -- # return 0 00:05:04.699 19:19:30 -- setup/devices.sh@187 -- # cleanup_dm 00:05:04.699 19:19:30 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:04.699 19:19:30 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:04.699 19:19:30 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:04.699 19:19:30 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.699 19:19:30 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:04.699 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:04.699 19:19:30 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:04.699 19:19:30 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:04.956 00:05:04.956 real 0m5.778s 00:05:04.957 user 0m1.055s 00:05:04.957 sys 0m1.414s 00:05:04.957 19:19:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.957 19:19:30 -- common/autotest_common.sh@10 -- # set +x 00:05:04.957 ************************************ 00:05:04.957 END TEST dm_mount 00:05:04.957 ************************************ 00:05:04.957 19:19:30 -- setup/devices.sh@1 -- # cleanup 00:05:04.957 19:19:30 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:04.957 19:19:30 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:04.957 19:19:30 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.957 19:19:30 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:04.957 19:19:30 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.957 19:19:30 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:05.215 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:05.215 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:05.215 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:05.215 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:05.215 19:19:30 -- setup/devices.sh@12 -- # cleanup_dm 00:05:05.215 19:19:30 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:05.215 19:19:30 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:05.215 19:19:30 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:05.215 19:19:30 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:05.215 19:19:30 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:05.215 19:19:30 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:05.215 00:05:05.215 real 0m14.367s 00:05:05.215 user 0m3.646s 00:05:05.215 sys 0m4.720s 00:05:05.215 19:19:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.215 19:19:30 -- common/autotest_common.sh@10 -- # set +x 00:05:05.215 ************************************ 00:05:05.215 END TEST devices 00:05:05.215 
************************************ 00:05:05.215 00:05:05.215 real 0m50.325s 00:05:05.215 user 0m11.952s 00:05:05.215 sys 0m17.937s 00:05:05.215 19:19:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.215 19:19:30 -- common/autotest_common.sh@10 -- # set +x 00:05:05.215 ************************************ 00:05:05.215 END TEST setup.sh 00:05:05.215 ************************************ 00:05:05.215 19:19:30 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:05.783 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:06.376 Hugepages 00:05:06.376 node hugesize free / total 00:05:06.376 node0 1048576kB 0 / 0 00:05:06.376 node0 2048kB 2048 / 2048 00:05:06.376 00:05:06.376 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:06.645 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:06.645 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:06.645 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:06.904 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:06.904 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:06.904 19:19:32 -- spdk/autotest.sh@130 -- # uname -s 00:05:06.904 19:19:32 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:06.904 19:19:32 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:06.904 19:19:32 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:07.471 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:08.409 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.409 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.409 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.409 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:08.409 19:19:33 -- common/autotest_common.sh@1518 -- # sleep 1 00:05:09.347 19:19:34 -- common/autotest_common.sh@1519 -- # bdfs=() 00:05:09.347 19:19:34 -- common/autotest_common.sh@1519 -- # local bdfs 00:05:09.347 19:19:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:09.347 19:19:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:09.347 19:19:34 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:09.347 19:19:34 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:09.347 19:19:34 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.347 19:19:34 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:09.347 19:19:34 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:09.605 19:19:35 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:05:09.605 19:19:35 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:09.605 19:19:35 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.173 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:10.432 Waiting for block devices as requested 00:05:10.432 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:10.432 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:10.432 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:10.692 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:15.994 * Events for some block/disk devices (0000:00:13.0) 
were not caught, they may be missing 00:05:15.994 19:19:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:15.994 19:19:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:15.994 19:19:41 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:15.994 19:19:41 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:05:15.994 19:19:41 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:15.994 19:19:41 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:15.994 19:19:41 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:05:15.994 19:19:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:15.994 19:19:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:15.994 19:19:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:15.994 19:19:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:15.994 19:19:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1543 -- # continue 00:05:15.994 19:19:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:15.994 19:19:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:15.994 19:19:41 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:15.994 19:19:41 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:05:15.994 19:19:41 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:15.994 19:19:41 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:15.994 19:19:41 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:05:15.994 19:19:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:15.994 19:19:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:15.994 19:19:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:15.994 19:19:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:15.994 19:19:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1543 -- # continue 00:05:15.994 19:19:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:15.994 19:19:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:15.994 19:19:41 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:15.994 19:19:41 -- common/autotest_common.sh@1488 -- # grep 0000:00:12.0/nvme/nvme 00:05:15.994 19:19:41 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:15.994 19:19:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:15.994 19:19:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:15.994 19:19:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1543 -- # continue 00:05:15.994 19:19:41 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:15.994 19:19:41 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:15.994 19:19:41 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:15.994 19:19:41 -- common/autotest_common.sh@1488 -- # grep 0000:00:13.0/nvme/nvme 00:05:15.994 19:19:41 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:15.994 19:19:41 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:15.994 19:19:41 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme3 00:05:15.994 19:19:41 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:15.994 19:19:41 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:15.994 19:19:41 -- 
common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:15.994 19:19:41 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:15.994 19:19:41 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:15.994 19:19:41 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:15.994 19:19:41 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:15.994 19:19:41 -- common/autotest_common.sh@1543 -- # continue 00:05:15.994 19:19:41 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:15.994 19:19:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:15.994 19:19:41 -- common/autotest_common.sh@10 -- # set +x 00:05:15.994 19:19:41 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:15.994 19:19:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:15.994 19:19:41 -- common/autotest_common.sh@10 -- # set +x 00:05:15.995 19:19:41 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:16.564 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.132 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.132 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.392 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.392 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:17.392 19:19:43 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:17.392 19:19:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:17.392 19:19:43 -- common/autotest_common.sh@10 -- # set +x 00:05:17.650 19:19:43 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:17.650 19:19:43 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:05:17.650 19:19:43 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:05:17.650 19:19:43 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:17.650 19:19:43 -- common/autotest_common.sh@1563 -- # local bdfs 00:05:17.650 19:19:43 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:05:17.650 19:19:43 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:17.650 19:19:43 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:17.650 19:19:43 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:17.650 19:19:43 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:17.650 19:19:43 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:17.650 19:19:43 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:05:17.650 19:19:43 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:17.650 19:19:43 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:17.650 19:19:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:17.650 19:19:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:17.650 19:19:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:17.650 19:19:43 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:17.650 19:19:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:17.650 19:19:43 
-- common/autotest_common.sh@1566 -- # device=0x0010 00:05:17.650 19:19:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:17.650 19:19:43 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:17.650 19:19:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:17.650 19:19:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:17.650 19:19:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:17.650 19:19:43 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:17.650 19:19:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:17.650 19:19:43 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:17.650 19:19:43 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:17.650 19:19:43 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:05:17.650 19:19:43 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:05:17.650 19:19:43 -- common/autotest_common.sh@1579 -- # return 0 00:05:17.650 19:19:43 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:17.650 19:19:43 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:17.650 19:19:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:17.650 19:19:43 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:17.650 19:19:43 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:17.650 19:19:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:17.650 19:19:43 -- common/autotest_common.sh@10 -- # set +x 00:05:17.650 19:19:43 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:17.650 19:19:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.650 19:19:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.650 19:19:43 -- common/autotest_common.sh@10 -- # set +x 00:05:17.650 ************************************ 00:05:17.650 START TEST env 00:05:17.650 ************************************ 00:05:17.650 19:19:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:17.909 * Looking for test storage... 
00:05:17.909 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:17.909 19:19:43 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:17.909 19:19:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.909 19:19:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.909 19:19:43 -- common/autotest_common.sh@10 -- # set +x 00:05:17.909 ************************************ 00:05:17.909 START TEST env_memory 00:05:17.909 ************************************ 00:05:17.909 19:19:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:17.909 00:05:17.909 00:05:17.909 CUnit - A unit testing framework for C - Version 2.1-3 00:05:17.909 http://cunit.sourceforge.net/ 00:05:17.909 00:05:17.909 00:05:17.909 Suite: memory 00:05:18.169 Test: alloc and free memory map ...[2024-04-24 19:19:43.600229] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:18.169 passed 00:05:18.169 Test: mem map translation ...[2024-04-24 19:19:43.640375] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:18.169 [2024-04-24 19:19:43.640455] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:18.169 [2024-04-24 19:19:43.640547] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:18.169 [2024-04-24 19:19:43.640601] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:18.169 passed 00:05:18.169 Test: mem map registration ...[2024-04-24 19:19:43.703142] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:18.169 [2024-04-24 19:19:43.703220] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:18.169 passed 00:05:18.169 Test: mem map adjacent registrations ...passed 00:05:18.169 00:05:18.169 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.169 suites 1 1 n/a 0 0 00:05:18.169 tests 4 4 4 0 0 00:05:18.169 asserts 152 152 152 0 n/a 00:05:18.169 00:05:18.169 Elapsed time = 0.231 seconds 00:05:18.169 00:05:18.169 real 0m0.275s 00:05:18.169 user 0m0.246s 00:05:18.169 sys 0m0.027s 00:05:18.169 19:19:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:18.169 19:19:43 -- common/autotest_common.sh@10 -- # set +x 00:05:18.169 ************************************ 00:05:18.169 END TEST env_memory 00:05:18.169 ************************************ 00:05:18.428 19:19:43 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:18.428 19:19:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:18.428 19:19:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:18.428 19:19:43 -- common/autotest_common.sh@10 -- # set +x 00:05:18.428 ************************************ 00:05:18.428 START TEST env_vtophys 00:05:18.428 ************************************ 00:05:18.428 19:19:43 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:18.428 EAL: lib.eal log level changed from notice to debug 00:05:18.428 EAL: Detected lcore 0 as core 0 on socket 0 00:05:18.428 EAL: Detected lcore 1 as core 0 on socket 0 00:05:18.428 EAL: Detected lcore 2 as core 0 on socket 0 00:05:18.428 EAL: Detected lcore 3 as core 0 on socket 0 00:05:18.428 EAL: Detected lcore 4 as core 0 on socket 0 00:05:18.428 EAL: Detected lcore 5 as core 0 on socket 0 00:05:18.428 EAL: Detected lcore 6 as core 0 on socket 0 00:05:18.428 EAL: Detected lcore 7 as core 0 on socket 0 00:05:18.428 EAL: Detected lcore 8 as core 0 on socket 0 00:05:18.428 EAL: Detected lcore 9 as core 0 on socket 0 00:05:18.428 EAL: Maximum logical cores by configuration: 128 00:05:18.428 EAL: Detected CPU lcores: 10 00:05:18.428 EAL: Detected NUMA nodes: 1 00:05:18.428 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:18.428 EAL: Detected shared linkage of DPDK 00:05:18.428 EAL: No shared files mode enabled, IPC will be disabled 00:05:18.428 EAL: Selected IOVA mode 'PA' 00:05:18.428 EAL: Probing VFIO support... 00:05:18.428 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:18.428 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:18.428 EAL: Ask a virtual area of 0x2e000 bytes 00:05:18.428 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:18.428 EAL: Setting up physically contiguous memory... 00:05:18.428 EAL: Setting maximum number of open files to 524288 00:05:18.428 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:18.428 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:18.428 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.428 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:18.428 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.428 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.428 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:18.428 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:18.428 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.428 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:18.428 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.428 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.428 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:18.428 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:18.428 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.428 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:18.428 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.428 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.428 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:18.428 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:18.428 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.428 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:18.428 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.429 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.429 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:18.429 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:18.429 EAL: Hugepages will be freed exactly as allocated. 
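The four 0x400000000-byte "Ask a virtual area" requests above fall straight out of the memseg-list geometry EAL printed first (n_segs:8192, hugepage_sz:2097152). A quick check of that arithmetic:

  printf '0x%x bytes per memseg list\n' $((8192 * 2097152))   # -> 0x400000000 (16 GiB)

So EAL reserves 4 x 16 GiB of virtual address space up front, plus a small 0x61000-byte tracking area per list; in this dynamic mode no hugepages are committed until an allocation needs them, which is also why "Hugepages will be freed exactly as allocated."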
00:05:18.429 EAL: No shared files mode enabled, IPC is disabled 00:05:18.429 EAL: No shared files mode enabled, IPC is disabled 00:05:18.687 EAL: TSC frequency is ~2290000 KHz 00:05:18.687 EAL: Main lcore 0 is ready (tid=7f38102cba40;cpuset=[0]) 00:05:18.687 EAL: Trying to obtain current memory policy. 00:05:18.687 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.687 EAL: Restoring previous memory policy: 0 00:05:18.687 EAL: request: mp_malloc_sync 00:05:18.687 EAL: No shared files mode enabled, IPC is disabled 00:05:18.687 EAL: Heap on socket 0 was expanded by 2MB 00:05:18.687 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:18.687 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:18.687 EAL: Mem event callback 'spdk:(nil)' registered 00:05:18.687 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:18.687 00:05:18.687 00:05:18.687 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.687 http://cunit.sourceforge.net/ 00:05:18.687 00:05:18.687 00:05:18.687 Suite: components_suite 00:05:18.946 Test: vtophys_malloc_test ...passed 00:05:18.946 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:18.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.946 EAL: Restoring previous memory policy: 4 00:05:18.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.946 EAL: request: mp_malloc_sync 00:05:18.946 EAL: No shared files mode enabled, IPC is disabled 00:05:18.946 EAL: Heap on socket 0 was expanded by 4MB 00:05:18.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.946 EAL: request: mp_malloc_sync 00:05:18.946 EAL: No shared files mode enabled, IPC is disabled 00:05:18.946 EAL: Heap on socket 0 was shrunk by 4MB 00:05:18.946 EAL: Trying to obtain current memory policy. 00:05:18.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.946 EAL: Restoring previous memory policy: 4 00:05:18.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.946 EAL: request: mp_malloc_sync 00:05:18.946 EAL: No shared files mode enabled, IPC is disabled 00:05:18.946 EAL: Heap on socket 0 was expanded by 6MB 00:05:18.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.946 EAL: request: mp_malloc_sync 00:05:18.946 EAL: No shared files mode enabled, IPC is disabled 00:05:18.946 EAL: Heap on socket 0 was shrunk by 6MB 00:05:18.946 EAL: Trying to obtain current memory policy. 00:05:18.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.946 EAL: Restoring previous memory policy: 4 00:05:18.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.946 EAL: request: mp_malloc_sync 00:05:18.946 EAL: No shared files mode enabled, IPC is disabled 00:05:18.946 EAL: Heap on socket 0 was expanded by 10MB 00:05:18.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.946 EAL: request: mp_malloc_sync 00:05:18.946 EAL: No shared files mode enabled, IPC is disabled 00:05:18.946 EAL: Heap on socket 0 was shrunk by 10MB 00:05:18.946 EAL: Trying to obtain current memory policy. 
00:05:18.946 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.946 EAL: Restoring previous memory policy: 4 00:05:18.946 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.946 EAL: request: mp_malloc_sync 00:05:18.946 EAL: No shared files mode enabled, IPC is disabled 00:05:18.946 EAL: Heap on socket 0 was expanded by 18MB 00:05:19.205 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.205 EAL: request: mp_malloc_sync 00:05:19.205 EAL: No shared files mode enabled, IPC is disabled 00:05:19.205 EAL: Heap on socket 0 was shrunk by 18MB 00:05:19.205 EAL: Trying to obtain current memory policy. 00:05:19.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.205 EAL: Restoring previous memory policy: 4 00:05:19.205 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.205 EAL: request: mp_malloc_sync 00:05:19.205 EAL: No shared files mode enabled, IPC is disabled 00:05:19.205 EAL: Heap on socket 0 was expanded by 34MB 00:05:19.205 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.205 EAL: request: mp_malloc_sync 00:05:19.205 EAL: No shared files mode enabled, IPC is disabled 00:05:19.205 EAL: Heap on socket 0 was shrunk by 34MB 00:05:19.205 EAL: Trying to obtain current memory policy. 00:05:19.205 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.205 EAL: Restoring previous memory policy: 4 00:05:19.205 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.205 EAL: request: mp_malloc_sync 00:05:19.205 EAL: No shared files mode enabled, IPC is disabled 00:05:19.205 EAL: Heap on socket 0 was expanded by 66MB 00:05:19.476 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.476 EAL: request: mp_malloc_sync 00:05:19.476 EAL: No shared files mode enabled, IPC is disabled 00:05:19.476 EAL: Heap on socket 0 was shrunk by 66MB 00:05:19.476 EAL: Trying to obtain current memory policy. 00:05:19.476 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.476 EAL: Restoring previous memory policy: 4 00:05:19.476 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.476 EAL: request: mp_malloc_sync 00:05:19.476 EAL: No shared files mode enabled, IPC is disabled 00:05:19.476 EAL: Heap on socket 0 was expanded by 130MB 00:05:19.736 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.736 EAL: request: mp_malloc_sync 00:05:19.736 EAL: No shared files mode enabled, IPC is disabled 00:05:19.736 EAL: Heap on socket 0 was shrunk by 130MB 00:05:19.996 EAL: Trying to obtain current memory policy. 00:05:19.996 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.996 EAL: Restoring previous memory policy: 4 00:05:19.996 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.996 EAL: request: mp_malloc_sync 00:05:19.996 EAL: No shared files mode enabled, IPC is disabled 00:05:19.996 EAL: Heap on socket 0 was expanded by 258MB 00:05:20.566 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.825 EAL: request: mp_malloc_sync 00:05:20.825 EAL: No shared files mode enabled, IPC is disabled 00:05:20.825 EAL: Heap on socket 0 was shrunk by 258MB 00:05:21.084 EAL: Trying to obtain current memory policy. 
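Each vtophys_spdk_malloc_test round doubles the buffer, and the heap grow/shrink sizes track it exactly: 4, 6, 10, 18, 34, 66, 130, 258 MB so far, with 514 MB and 1026 MB still to come below. Every step is 2^k + 2 MB, i.e. the 2^k MB allocation plus what looks like one extra 2 MiB hugepage of heap overhead; a one-liner reproduces the whole sequence:

  for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo   # 4MB 6MB ... 1026MB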
00:05:21.084 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.084 EAL: Restoring previous memory policy: 4 00:05:21.084 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.084 EAL: request: mp_malloc_sync 00:05:21.084 EAL: No shared files mode enabled, IPC is disabled 00:05:21.084 EAL: Heap on socket 0 was expanded by 514MB 00:05:22.463 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.463 EAL: request: mp_malloc_sync 00:05:22.463 EAL: No shared files mode enabled, IPC is disabled 00:05:22.463 EAL: Heap on socket 0 was shrunk by 514MB 00:05:23.399 EAL: Trying to obtain current memory policy. 00:05:23.399 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:23.399 EAL: Restoring previous memory policy: 4 00:05:23.399 EAL: Calling mem event callback 'spdk:(nil)' 00:05:23.399 EAL: request: mp_malloc_sync 00:05:23.399 EAL: No shared files mode enabled, IPC is disabled 00:05:23.399 EAL: Heap on socket 0 was expanded by 1026MB 00:05:25.934 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.934 EAL: request: mp_malloc_sync 00:05:25.934 EAL: No shared files mode enabled, IPC is disabled 00:05:25.934 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:27.319 passed 00:05:27.319 00:05:27.319 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.319 suites 1 1 n/a 0 0 00:05:27.319 tests 2 2 2 0 0 00:05:27.319 asserts 5306 5306 5306 0 n/a 00:05:27.319 00:05:27.319 Elapsed time = 8.651 seconds 00:05:27.319 EAL: Calling mem event callback 'spdk:(nil)' 00:05:27.319 EAL: request: mp_malloc_sync 00:05:27.319 EAL: No shared files mode enabled, IPC is disabled 00:05:27.319 EAL: Heap on socket 0 was shrunk by 2MB 00:05:27.319 EAL: No shared files mode enabled, IPC is disabled 00:05:27.319 EAL: No shared files mode enabled, IPC is disabled 00:05:27.319 EAL: No shared files mode enabled, IPC is disabled 00:05:27.319 00:05:27.319 real 0m8.945s 00:05:27.319 user 0m7.975s 00:05:27.319 sys 0m0.811s 00:05:27.319 19:19:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.319 ************************************ 00:05:27.319 END TEST env_vtophys 00:05:27.319 ************************************ 00:05:27.319 19:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:27.319 19:19:52 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:27.319 19:19:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.319 19:19:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.319 19:19:52 -- common/autotest_common.sh@10 -- # set +x 00:05:27.600 ************************************ 00:05:27.600 START TEST env_pci 00:05:27.600 ************************************ 00:05:27.600 19:19:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:27.600 00:05:27.600 00:05:27.600 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.600 http://cunit.sourceforge.net/ 00:05:27.600 00:05:27.600 00:05:27.600 Suite: pci 00:05:27.600 Test: pci_hook ...[2024-04-24 19:19:53.061821] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61577 has claimed it 00:05:27.600 passed 00:05:27.600 00:05:27.600 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.600 suites 1 1 n/a 0 0 00:05:27.600 tests 1 1 1 0 0 00:05:27.600 asserts 25 25 25 0 n/a 00:05:27.600 00:05:27.600 Elapsed time = 0.009 seconds 00:05:27.600 EAL: Cannot find device (10000:00:01.0) 00:05:27.600 EAL: Failed to attach device 
on primary process 00:05:27.600 00:05:27.600 real 0m0.103s 00:05:27.600 user 0m0.046s 00:05:27.600 sys 0m0.055s 00:05:27.600 19:19:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.600 19:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:27.600 ************************************ 00:05:27.600 END TEST env_pci 00:05:27.600 ************************************ 00:05:27.600 19:19:53 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:27.600 19:19:53 -- env/env.sh@15 -- # uname 00:05:27.600 19:19:53 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:27.600 19:19:53 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:27.600 19:19:53 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.600 19:19:53 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:27.600 19:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.600 19:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:27.860 ************************************ 00:05:27.860 START TEST env_dpdk_post_init 00:05:27.860 ************************************ 00:05:27.860 19:19:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.860 EAL: Detected CPU lcores: 10 00:05:27.860 EAL: Detected NUMA nodes: 1 00:05:27.860 EAL: Detected shared linkage of DPDK 00:05:27.860 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:27.860 EAL: Selected IOVA mode 'PA' 00:05:27.860 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:27.860 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:27.860 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:27.860 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:27.860 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:28.119 Starting DPDK initialization... 00:05:28.119 Starting SPDK post initialization... 00:05:28.119 SPDK NVMe probe 00:05:28.119 Attaching to 0000:00:10.0 00:05:28.119 Attaching to 0000:00:11.0 00:05:28.119 Attaching to 0000:00:12.0 00:05:28.119 Attaching to 0000:00:13.0 00:05:28.119 Attached to 0000:00:10.0 00:05:28.119 Attached to 0000:00:11.0 00:05:28.119 Attached to 0000:00:13.0 00:05:28.119 Attached to 0000:00:12.0 00:05:28.119 Cleaning up... 
00:05:28.119 ************************************ 00:05:28.119 END TEST env_dpdk_post_init 00:05:28.119 ************************************ 00:05:28.119 00:05:28.119 real 0m0.283s 00:05:28.119 user 0m0.096s 00:05:28.119 sys 0m0.092s 00:05:28.119 19:19:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.119 19:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:28.119 19:19:53 -- env/env.sh@26 -- # uname 00:05:28.119 19:19:53 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:28.119 19:19:53 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.119 19:19:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.119 19:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.119 19:19:53 -- common/autotest_common.sh@10 -- # set +x 00:05:28.119 ************************************ 00:05:28.119 START TEST env_mem_callbacks 00:05:28.119 ************************************ 00:05:28.119 19:19:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.119 EAL: Detected CPU lcores: 10 00:05:28.119 EAL: Detected NUMA nodes: 1 00:05:28.119 EAL: Detected shared linkage of DPDK 00:05:28.119 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.119 EAL: Selected IOVA mode 'PA' 00:05:28.378 00:05:28.378 00:05:28.378 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.378 http://cunit.sourceforge.net/ 00:05:28.378 00:05:28.378 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.378 00:05:28.378 Suite: memory 00:05:28.378 Test: test ... 00:05:28.378 register 0x200000200000 2097152 00:05:28.378 malloc 3145728 00:05:28.378 register 0x200000400000 4194304 00:05:28.378 buf 0x2000004fffc0 len 3145728 PASSED 00:05:28.378 malloc 64 00:05:28.378 buf 0x2000004ffec0 len 64 PASSED 00:05:28.378 malloc 4194304 00:05:28.378 register 0x200000800000 6291456 00:05:28.378 buf 0x2000009fffc0 len 4194304 PASSED 00:05:28.378 free 0x2000004fffc0 3145728 00:05:28.378 free 0x2000004ffec0 64 00:05:28.378 unregister 0x200000400000 4194304 PASSED 00:05:28.378 free 0x2000009fffc0 4194304 00:05:28.378 unregister 0x200000800000 6291456 PASSED 00:05:28.378 malloc 8388608 00:05:28.378 register 0x200000400000 10485760 00:05:28.378 buf 0x2000005fffc0 len 8388608 PASSED 00:05:28.378 free 0x2000005fffc0 8388608 00:05:28.378 unregister 0x200000400000 10485760 PASSED 00:05:28.378 passed 00:05:28.378 00:05:28.378 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.378 suites 1 1 n/a 0 0 00:05:28.378 tests 1 1 1 0 0 00:05:28.378 asserts 15 15 15 0 n/a 00:05:28.378 00:05:28.378 Elapsed time = 0.097 seconds 00:05:28.378 00:05:28.378 real 0m0.299s 00:05:28.378 user 0m0.126s 00:05:28.378 sys 0m0.069s 00:05:28.378 19:19:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.378 19:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:28.378 ************************************ 00:05:28.378 END TEST env_mem_callbacks 00:05:28.378 ************************************ 00:05:28.637 00:05:28.637 real 0m10.743s 00:05:28.637 user 0m8.792s 00:05:28.637 sys 0m1.538s 00:05:28.637 19:19:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.637 19:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:28.638 ************************************ 00:05:28.638 END TEST env 00:05:28.638 ************************************ 00:05:28.638 19:19:54 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
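Every suite in this log is framed by the same run_test helper from autotest_common.sh: it prints the START banner, times the suite (the real/user/sys triple after each END banner), and propagates the exit code; the '[' 2 -le 1 ']' traced at its entry is just an argument-count guard. Roughly, as a sketch rather than the exact in-tree helper:

  run_test() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                  # suite body; its rc decides pass/fail
      local rc=$?
      echo "END TEST $name"
      return $rc
  }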
00:05:28.638 19:19:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.638 19:19:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.638 19:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:28.638 ************************************ 00:05:28.638 START TEST rpc 00:05:28.638 ************************************ 00:05:28.638 19:19:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:28.638 * Looking for test storage... 00:05:28.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:28.897 19:19:54 -- rpc/rpc.sh@65 -- # spdk_pid=61709 00:05:28.897 19:19:54 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.897 19:19:54 -- rpc/rpc.sh@67 -- # waitforlisten 61709 00:05:28.897 19:19:54 -- common/autotest_common.sh@817 -- # '[' -z 61709 ']' 00:05:28.897 19:19:54 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:28.897 19:19:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.897 19:19:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:28.897 19:19:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.897 19:19:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:28.897 19:19:54 -- common/autotest_common.sh@10 -- # set +x 00:05:28.897 [2024-04-24 19:19:54.418521] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:05:28.897 [2024-04-24 19:19:54.418760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61709 ] 00:05:29.157 [2024-04-24 19:19:54.584155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.157 [2024-04-24 19:19:54.830838] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:29.157 [2024-04-24 19:19:54.830892] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 61709' to capture a snapshot of events at runtime. 00:05:29.157 [2024-04-24 19:19:54.830913] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:29.157 [2024-04-24 19:19:54.830925] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:29.157 [2024-04-24 19:19:54.830934] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid61709 for offline analysis/debug. 
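The app_setup_trace notices above are actionable: with -e bdev the target keeps a tracepoint ring in shared memory that can be inspected live or after the fact. A sketch of both paths, using the in-tree spdk_trace tool:

  build/bin/spdk_trace -s spdk_tgt -p 61709              # snapshot the live target
  cp /dev/shm/spdk_tgt_trace.pid61709 /tmp/trace.shm     # or keep the shm file around...
  build/bin/spdk_trace -f /tmp/trace.shm                 # ...for offline analysis

The trace_get_info RPC exercised by rpc_trace_cmd_test further down reads back the same state: shm path /dev/shm/spdk_tgt_trace.pid61709 and group mask 0x8 (the bdev group).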
00:05:29.157 [2024-04-24 19:19:54.830970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.536 19:19:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:30.536 19:19:55 -- common/autotest_common.sh@850 -- # return 0 00:05:30.536 19:19:55 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.536 19:19:55 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:30.536 19:19:55 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:30.536 19:19:55 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:30.536 19:19:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.536 19:19:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.536 19:19:55 -- common/autotest_common.sh@10 -- # set +x 00:05:30.536 ************************************ 00:05:30.536 START TEST rpc_integrity 00:05:30.536 ************************************ 00:05:30.536 19:19:55 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:30.536 19:19:55 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:30.536 19:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.536 19:19:55 -- common/autotest_common.sh@10 -- # set +x 00:05:30.536 19:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.536 19:19:55 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:30.536 19:19:55 -- rpc/rpc.sh@13 -- # jq length 00:05:30.536 19:19:55 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:30.536 19:19:55 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:30.536 19:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.536 19:19:55 -- common/autotest_common.sh@10 -- # set +x 00:05:30.536 19:19:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.536 19:19:55 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:30.536 19:19:55 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:30.536 19:19:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.536 19:19:55 -- common/autotest_common.sh@10 -- # set +x 00:05:30.536 19:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.536 19:19:56 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:30.536 { 00:05:30.536 "name": "Malloc0", 00:05:30.536 "aliases": [ 00:05:30.536 "ebd063c4-3ac0-4949-be20-5cfe1e25dd42" 00:05:30.536 ], 00:05:30.536 "product_name": "Malloc disk", 00:05:30.536 "block_size": 512, 00:05:30.536 "num_blocks": 16384, 00:05:30.536 "uuid": "ebd063c4-3ac0-4949-be20-5cfe1e25dd42", 00:05:30.536 "assigned_rate_limits": { 00:05:30.536 "rw_ios_per_sec": 0, 00:05:30.536 "rw_mbytes_per_sec": 0, 00:05:30.536 "r_mbytes_per_sec": 0, 00:05:30.536 "w_mbytes_per_sec": 0 00:05:30.536 }, 00:05:30.536 "claimed": false, 00:05:30.536 "zoned": false, 00:05:30.536 "supported_io_types": { 00:05:30.536 "read": true, 00:05:30.536 "write": true, 00:05:30.536 "unmap": true, 00:05:30.536 "write_zeroes": true, 00:05:30.536 "flush": true, 00:05:30.536 "reset": true, 00:05:30.536 "compare": false, 00:05:30.536 "compare_and_write": false, 00:05:30.536 "abort": true, 00:05:30.536 "nvme_admin": false, 00:05:30.536 "nvme_io": false 00:05:30.536 }, 00:05:30.536 "memory_domains": [ 00:05:30.536 { 00:05:30.536 "dma_device_id": "system", 00:05:30.536 "dma_device_type": 1 
00:05:30.536 }, 00:05:30.536 { 00:05:30.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.536 "dma_device_type": 2 00:05:30.536 } 00:05:30.536 ], 00:05:30.536 "driver_specific": {} 00:05:30.536 } 00:05:30.536 ]' 00:05:30.536 19:19:56 -- rpc/rpc.sh@17 -- # jq length 00:05:30.536 19:19:56 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:30.536 19:19:56 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:30.536 19:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.536 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.536 [2024-04-24 19:19:56.066283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:30.536 [2024-04-24 19:19:56.066425] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:30.536 [2024-04-24 19:19:56.066472] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:05:30.536 [2024-04-24 19:19:56.066514] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:30.536 [2024-04-24 19:19:56.068923] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:30.536 [2024-04-24 19:19:56.069006] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:30.536 Passthru0 00:05:30.536 19:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.536 19:19:56 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:30.536 19:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.536 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.536 19:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.536 19:19:56 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:30.536 { 00:05:30.536 "name": "Malloc0", 00:05:30.536 "aliases": [ 00:05:30.536 "ebd063c4-3ac0-4949-be20-5cfe1e25dd42" 00:05:30.536 ], 00:05:30.536 "product_name": "Malloc disk", 00:05:30.536 "block_size": 512, 00:05:30.536 "num_blocks": 16384, 00:05:30.536 "uuid": "ebd063c4-3ac0-4949-be20-5cfe1e25dd42", 00:05:30.536 "assigned_rate_limits": { 00:05:30.536 "rw_ios_per_sec": 0, 00:05:30.536 "rw_mbytes_per_sec": 0, 00:05:30.536 "r_mbytes_per_sec": 0, 00:05:30.536 "w_mbytes_per_sec": 0 00:05:30.536 }, 00:05:30.536 "claimed": true, 00:05:30.536 "claim_type": "exclusive_write", 00:05:30.536 "zoned": false, 00:05:30.536 "supported_io_types": { 00:05:30.536 "read": true, 00:05:30.536 "write": true, 00:05:30.536 "unmap": true, 00:05:30.537 "write_zeroes": true, 00:05:30.537 "flush": true, 00:05:30.537 "reset": true, 00:05:30.537 "compare": false, 00:05:30.537 "compare_and_write": false, 00:05:30.537 "abort": true, 00:05:30.537 "nvme_admin": false, 00:05:30.537 "nvme_io": false 00:05:30.537 }, 00:05:30.537 "memory_domains": [ 00:05:30.537 { 00:05:30.537 "dma_device_id": "system", 00:05:30.537 "dma_device_type": 1 00:05:30.537 }, 00:05:30.537 { 00:05:30.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.537 "dma_device_type": 2 00:05:30.537 } 00:05:30.537 ], 00:05:30.537 "driver_specific": {} 00:05:30.537 }, 00:05:30.537 { 00:05:30.537 "name": "Passthru0", 00:05:30.537 "aliases": [ 00:05:30.537 "e0d4dafb-8c33-5f9e-8076-434f4369d6d8" 00:05:30.537 ], 00:05:30.537 "product_name": "passthru", 00:05:30.537 "block_size": 512, 00:05:30.537 "num_blocks": 16384, 00:05:30.537 "uuid": "e0d4dafb-8c33-5f9e-8076-434f4369d6d8", 00:05:30.537 "assigned_rate_limits": { 00:05:30.537 "rw_ios_per_sec": 0, 00:05:30.537 "rw_mbytes_per_sec": 0, 00:05:30.537 "r_mbytes_per_sec": 0, 00:05:30.537 "w_mbytes_per_sec": 0 
00:05:30.537 }, 00:05:30.537 "claimed": false, 00:05:30.537 "zoned": false, 00:05:30.537 "supported_io_types": { 00:05:30.537 "read": true, 00:05:30.537 "write": true, 00:05:30.537 "unmap": true, 00:05:30.537 "write_zeroes": true, 00:05:30.537 "flush": true, 00:05:30.537 "reset": true, 00:05:30.537 "compare": false, 00:05:30.537 "compare_and_write": false, 00:05:30.537 "abort": true, 00:05:30.537 "nvme_admin": false, 00:05:30.537 "nvme_io": false 00:05:30.537 }, 00:05:30.537 "memory_domains": [ 00:05:30.537 { 00:05:30.537 "dma_device_id": "system", 00:05:30.537 "dma_device_type": 1 00:05:30.537 }, 00:05:30.537 { 00:05:30.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.537 "dma_device_type": 2 00:05:30.537 } 00:05:30.537 ], 00:05:30.537 "driver_specific": { 00:05:30.537 "passthru": { 00:05:30.537 "name": "Passthru0", 00:05:30.537 "base_bdev_name": "Malloc0" 00:05:30.537 } 00:05:30.537 } 00:05:30.537 } 00:05:30.537 ]' 00:05:30.537 19:19:56 -- rpc/rpc.sh@21 -- # jq length 00:05:30.537 19:19:56 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:30.537 19:19:56 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:30.537 19:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.537 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.537 19:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.537 19:19:56 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:30.537 19:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.537 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.537 19:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.537 19:19:56 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:30.537 19:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.537 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.537 19:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.537 19:19:56 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.537 19:19:56 -- rpc/rpc.sh@26 -- # jq length 00:05:30.797 ************************************ 00:05:30.797 END TEST rpc_integrity 00:05:30.797 ************************************ 00:05:30.797 19:19:56 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.797 00:05:30.797 real 0m0.350s 00:05:30.797 user 0m0.187s 00:05:30.797 sys 0m0.049s 00:05:30.797 19:19:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:30.797 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.797 19:19:56 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:30.797 19:19:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.797 19:19:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.797 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.797 ************************************ 00:05:30.797 START TEST rpc_plugins 00:05:30.797 ************************************ 00:05:30.797 19:19:56 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:05:30.797 19:19:56 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:30.797 19:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.797 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.797 19:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.797 19:19:56 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:30.797 19:19:56 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:30.797 19:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:30.797 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.797 19:19:56 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:30.797 19:19:56 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:30.797 { 00:05:30.797 "name": "Malloc1", 00:05:30.797 "aliases": [ 00:05:30.797 "d1870ebd-6363-419c-8de3-0d80e1ef38b4" 00:05:30.797 ], 00:05:30.797 "product_name": "Malloc disk", 00:05:30.797 "block_size": 4096, 00:05:30.797 "num_blocks": 256, 00:05:30.797 "uuid": "d1870ebd-6363-419c-8de3-0d80e1ef38b4", 00:05:30.797 "assigned_rate_limits": { 00:05:30.797 "rw_ios_per_sec": 0, 00:05:30.797 "rw_mbytes_per_sec": 0, 00:05:30.797 "r_mbytes_per_sec": 0, 00:05:30.797 "w_mbytes_per_sec": 0 00:05:30.797 }, 00:05:30.797 "claimed": false, 00:05:30.797 "zoned": false, 00:05:30.797 "supported_io_types": { 00:05:30.797 "read": true, 00:05:30.797 "write": true, 00:05:30.797 "unmap": true, 00:05:30.797 "write_zeroes": true, 00:05:30.797 "flush": true, 00:05:30.797 "reset": true, 00:05:30.797 "compare": false, 00:05:30.797 "compare_and_write": false, 00:05:30.797 "abort": true, 00:05:30.797 "nvme_admin": false, 00:05:30.797 "nvme_io": false 00:05:30.797 }, 00:05:30.797 "memory_domains": [ 00:05:30.797 { 00:05:30.797 "dma_device_id": "system", 00:05:30.797 "dma_device_type": 1 00:05:30.797 }, 00:05:30.797 { 00:05:30.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:30.797 "dma_device_type": 2 00:05:30.797 } 00:05:30.797 ], 00:05:30.797 "driver_specific": {} 00:05:30.797 } 00:05:30.797 ]' 00:05:30.797 19:19:56 -- rpc/rpc.sh@32 -- # jq length 00:05:31.056 19:19:56 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:31.056 19:19:56 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:31.056 19:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.056 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:31.056 19:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.056 19:19:56 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:31.056 19:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.056 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:31.056 19:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.056 19:19:56 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:31.056 19:19:56 -- rpc/rpc.sh@36 -- # jq length 00:05:31.056 ************************************ 00:05:31.056 END TEST rpc_plugins 00:05:31.056 ************************************ 00:05:31.056 19:19:56 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:31.056 00:05:31.056 real 0m0.172s 00:05:31.056 user 0m0.101s 00:05:31.056 sys 0m0.025s 00:05:31.056 19:19:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.056 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:31.056 19:19:56 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:31.056 19:19:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.056 19:19:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.056 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:31.056 ************************************ 00:05:31.056 START TEST rpc_trace_cmd_test 00:05:31.056 ************************************ 00:05:31.056 19:19:56 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:05:31.056 19:19:56 -- rpc/rpc.sh@40 -- # local info 00:05:31.056 19:19:56 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:31.056 19:19:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.056 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:31.056 19:19:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.056 19:19:56 -- rpc/rpc.sh@42 -- # 
info='{ 00:05:31.056 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid61709", 00:05:31.056 "tpoint_group_mask": "0x8", 00:05:31.056 "iscsi_conn": { 00:05:31.056 "mask": "0x2", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 }, 00:05:31.056 "scsi": { 00:05:31.056 "mask": "0x4", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 }, 00:05:31.056 "bdev": { 00:05:31.056 "mask": "0x8", 00:05:31.056 "tpoint_mask": "0xffffffffffffffff" 00:05:31.056 }, 00:05:31.056 "nvmf_rdma": { 00:05:31.056 "mask": "0x10", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 }, 00:05:31.056 "nvmf_tcp": { 00:05:31.056 "mask": "0x20", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 }, 00:05:31.056 "ftl": { 00:05:31.056 "mask": "0x40", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 }, 00:05:31.056 "blobfs": { 00:05:31.056 "mask": "0x80", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 }, 00:05:31.056 "dsa": { 00:05:31.056 "mask": "0x200", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 }, 00:05:31.056 "thread": { 00:05:31.056 "mask": "0x400", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 }, 00:05:31.056 "nvme_pcie": { 00:05:31.056 "mask": "0x800", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 }, 00:05:31.056 "iaa": { 00:05:31.056 "mask": "0x1000", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 }, 00:05:31.056 "nvme_tcp": { 00:05:31.056 "mask": "0x2000", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 }, 00:05:31.056 "bdev_nvme": { 00:05:31.056 "mask": "0x4000", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 }, 00:05:31.056 "sock": { 00:05:31.056 "mask": "0x8000", 00:05:31.056 "tpoint_mask": "0x0" 00:05:31.056 } 00:05:31.056 }' 00:05:31.056 19:19:56 -- rpc/rpc.sh@43 -- # jq length 00:05:31.316 19:19:56 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:31.316 19:19:56 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:31.316 19:19:56 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:31.316 19:19:56 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:31.316 19:19:56 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:31.316 19:19:56 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:31.316 19:19:56 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:31.316 19:19:56 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:31.316 ************************************ 00:05:31.316 END TEST rpc_trace_cmd_test 00:05:31.316 ************************************ 00:05:31.316 19:19:56 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:31.316 00:05:31.316 real 0m0.268s 00:05:31.316 user 0m0.215s 00:05:31.316 sys 0m0.040s 00:05:31.316 19:19:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.316 19:19:56 -- common/autotest_common.sh@10 -- # set +x 00:05:31.574 19:19:57 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:31.574 19:19:57 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:31.574 19:19:57 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:31.575 19:19:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:31.575 19:19:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.575 19:19:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.575 ************************************ 00:05:31.575 START TEST rpc_daemon_integrity 00:05:31.575 ************************************ 00:05:31.575 19:19:57 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:31.575 19:19:57 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:31.575 19:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.575 19:19:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.575 19:19:57 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:05:31.575 19:19:57 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:31.575 19:19:57 -- rpc/rpc.sh@13 -- # jq length 00:05:31.575 19:19:57 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:31.575 19:19:57 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:31.575 19:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.575 19:19:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.575 19:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.575 19:19:57 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:31.575 19:19:57 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:31.575 19:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.575 19:19:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.575 19:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.575 19:19:57 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:31.575 { 00:05:31.575 "name": "Malloc2", 00:05:31.575 "aliases": [ 00:05:31.575 "dbfccca3-2f5b-4ab6-87a6-b4402ece0c3a" 00:05:31.575 ], 00:05:31.575 "product_name": "Malloc disk", 00:05:31.575 "block_size": 512, 00:05:31.575 "num_blocks": 16384, 00:05:31.575 "uuid": "dbfccca3-2f5b-4ab6-87a6-b4402ece0c3a", 00:05:31.575 "assigned_rate_limits": { 00:05:31.575 "rw_ios_per_sec": 0, 00:05:31.575 "rw_mbytes_per_sec": 0, 00:05:31.575 "r_mbytes_per_sec": 0, 00:05:31.575 "w_mbytes_per_sec": 0 00:05:31.575 }, 00:05:31.575 "claimed": false, 00:05:31.575 "zoned": false, 00:05:31.575 "supported_io_types": { 00:05:31.575 "read": true, 00:05:31.575 "write": true, 00:05:31.575 "unmap": true, 00:05:31.575 "write_zeroes": true, 00:05:31.575 "flush": true, 00:05:31.575 "reset": true, 00:05:31.575 "compare": false, 00:05:31.575 "compare_and_write": false, 00:05:31.575 "abort": true, 00:05:31.575 "nvme_admin": false, 00:05:31.575 "nvme_io": false 00:05:31.575 }, 00:05:31.575 "memory_domains": [ 00:05:31.575 { 00:05:31.575 "dma_device_id": "system", 00:05:31.575 "dma_device_type": 1 00:05:31.575 }, 00:05:31.575 { 00:05:31.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.575 "dma_device_type": 2 00:05:31.575 } 00:05:31.575 ], 00:05:31.575 "driver_specific": {} 00:05:31.575 } 00:05:31.575 ]' 00:05:31.575 19:19:57 -- rpc/rpc.sh@17 -- # jq length 00:05:31.865 19:19:57 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:31.865 19:19:57 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:31.865 19:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.865 19:19:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.865 [2024-04-24 19:19:57.265245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:31.865 [2024-04-24 19:19:57.265325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:31.865 [2024-04-24 19:19:57.265349] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:31.865 [2024-04-24 19:19:57.265362] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:31.865 [2024-04-24 19:19:57.267845] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:31.865 [2024-04-24 19:19:57.267895] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:31.865 Passthru0 00:05:31.865 19:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.865 19:19:57 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:31.865 19:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.865 19:19:57 -- common/autotest_common.sh@10 -- # set +x 
00:05:31.865 19:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.865 19:19:57 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:31.865 { 00:05:31.865 "name": "Malloc2", 00:05:31.865 "aliases": [ 00:05:31.865 "dbfccca3-2f5b-4ab6-87a6-b4402ece0c3a" 00:05:31.865 ], 00:05:31.865 "product_name": "Malloc disk", 00:05:31.865 "block_size": 512, 00:05:31.865 "num_blocks": 16384, 00:05:31.865 "uuid": "dbfccca3-2f5b-4ab6-87a6-b4402ece0c3a", 00:05:31.865 "assigned_rate_limits": { 00:05:31.865 "rw_ios_per_sec": 0, 00:05:31.865 "rw_mbytes_per_sec": 0, 00:05:31.865 "r_mbytes_per_sec": 0, 00:05:31.865 "w_mbytes_per_sec": 0 00:05:31.865 }, 00:05:31.865 "claimed": true, 00:05:31.865 "claim_type": "exclusive_write", 00:05:31.865 "zoned": false, 00:05:31.865 "supported_io_types": { 00:05:31.865 "read": true, 00:05:31.865 "write": true, 00:05:31.865 "unmap": true, 00:05:31.865 "write_zeroes": true, 00:05:31.865 "flush": true, 00:05:31.865 "reset": true, 00:05:31.865 "compare": false, 00:05:31.865 "compare_and_write": false, 00:05:31.865 "abort": true, 00:05:31.865 "nvme_admin": false, 00:05:31.865 "nvme_io": false 00:05:31.865 }, 00:05:31.865 "memory_domains": [ 00:05:31.865 { 00:05:31.865 "dma_device_id": "system", 00:05:31.865 "dma_device_type": 1 00:05:31.865 }, 00:05:31.865 { 00:05:31.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.865 "dma_device_type": 2 00:05:31.865 } 00:05:31.865 ], 00:05:31.865 "driver_specific": {} 00:05:31.865 }, 00:05:31.865 { 00:05:31.865 "name": "Passthru0", 00:05:31.865 "aliases": [ 00:05:31.865 "aedd2859-30e2-5f2f-abdd-cf8eee037066" 00:05:31.865 ], 00:05:31.865 "product_name": "passthru", 00:05:31.865 "block_size": 512, 00:05:31.865 "num_blocks": 16384, 00:05:31.865 "uuid": "aedd2859-30e2-5f2f-abdd-cf8eee037066", 00:05:31.865 "assigned_rate_limits": { 00:05:31.865 "rw_ios_per_sec": 0, 00:05:31.865 "rw_mbytes_per_sec": 0, 00:05:31.865 "r_mbytes_per_sec": 0, 00:05:31.865 "w_mbytes_per_sec": 0 00:05:31.865 }, 00:05:31.865 "claimed": false, 00:05:31.865 "zoned": false, 00:05:31.865 "supported_io_types": { 00:05:31.865 "read": true, 00:05:31.865 "write": true, 00:05:31.865 "unmap": true, 00:05:31.865 "write_zeroes": true, 00:05:31.865 "flush": true, 00:05:31.865 "reset": true, 00:05:31.865 "compare": false, 00:05:31.865 "compare_and_write": false, 00:05:31.865 "abort": true, 00:05:31.865 "nvme_admin": false, 00:05:31.865 "nvme_io": false 00:05:31.865 }, 00:05:31.865 "memory_domains": [ 00:05:31.865 { 00:05:31.865 "dma_device_id": "system", 00:05:31.865 "dma_device_type": 1 00:05:31.865 }, 00:05:31.865 { 00:05:31.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:31.865 "dma_device_type": 2 00:05:31.865 } 00:05:31.865 ], 00:05:31.865 "driver_specific": { 00:05:31.865 "passthru": { 00:05:31.865 "name": "Passthru0", 00:05:31.865 "base_bdev_name": "Malloc2" 00:05:31.865 } 00:05:31.865 } 00:05:31.865 } 00:05:31.865 ]' 00:05:31.865 19:19:57 -- rpc/rpc.sh@21 -- # jq length 00:05:31.865 19:19:57 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:31.865 19:19:57 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:31.865 19:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.865 19:19:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.865 19:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.865 19:19:57 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:31.865 19:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.865 19:19:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.865 19:19:57 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.865 19:19:57 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:31.865 19:19:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:31.865 19:19:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.865 19:19:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:31.866 19:19:57 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:31.866 19:19:57 -- rpc/rpc.sh@26 -- # jq length 00:05:31.866 ************************************ 00:05:31.866 END TEST rpc_daemon_integrity 00:05:31.866 ************************************ 00:05:31.866 19:19:57 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:31.866 00:05:31.866 real 0m0.360s 00:05:31.866 user 0m0.197s 00:05:31.866 sys 0m0.053s 00:05:31.866 19:19:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.866 19:19:57 -- common/autotest_common.sh@10 -- # set +x 00:05:31.866 19:19:57 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:31.866 19:19:57 -- rpc/rpc.sh@84 -- # killprocess 61709 00:05:31.866 19:19:57 -- common/autotest_common.sh@936 -- # '[' -z 61709 ']' 00:05:31.866 19:19:57 -- common/autotest_common.sh@940 -- # kill -0 61709 00:05:31.866 19:19:57 -- common/autotest_common.sh@941 -- # uname 00:05:31.866 19:19:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:31.866 19:19:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61709 00:05:32.124 killing process with pid 61709 00:05:32.125 19:19:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:32.125 19:19:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:32.125 19:19:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61709' 00:05:32.125 19:19:57 -- common/autotest_common.sh@955 -- # kill 61709 00:05:32.125 19:19:57 -- common/autotest_common.sh@960 -- # wait 61709 00:05:34.662 00:05:34.662 real 0m5.909s 00:05:34.662 user 0m6.632s 00:05:34.662 sys 0m1.012s 00:05:34.662 ************************************ 00:05:34.662 END TEST rpc 00:05:34.662 ************************************ 00:05:34.662 19:20:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.662 19:20:00 -- common/autotest_common.sh@10 -- # set +x 00:05:34.662 19:20:00 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:34.662 19:20:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.662 19:20:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.662 19:20:00 -- common/autotest_common.sh@10 -- # set +x 00:05:34.662 ************************************ 00:05:34.662 START TEST skip_rpc 00:05:34.662 ************************************ 00:05:34.662 19:20:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:34.922 * Looking for test storage... 
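The killprocess call that just tore down the rpc target (pid 61709) is the standard shutdown dance after every suite here, and skip_rpc repeats it for its own targets below. As a sketch of what the trace shows:

  killprocess() {
      local pid=$1
      kill -0 "$pid"                      # assert the target is still alive
      ps --no-headers -o comm= "$pid"     # reactor_0 for an SPDK app (sudo is special-cased)
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                         # reap it so the EXIT trap does not fire again
  }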
00:05:34.922 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:34.922 19:20:00 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:34.922 19:20:00 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:34.922 19:20:00 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:34.922 19:20:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.923 19:20:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.923 19:20:00 -- common/autotest_common.sh@10 -- # set +x 00:05:34.923 ************************************ 00:05:34.923 START TEST skip_rpc 00:05:34.923 ************************************ 00:05:34.923 19:20:00 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:05:34.923 19:20:00 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=61968 00:05:34.923 19:20:00 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:34.923 19:20:00 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.923 19:20:00 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:34.923 [2024-04-24 19:20:00.579966] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:05:34.923 [2024-04-24 19:20:00.580165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61968 ] 00:05:35.182 [2024-04-24 19:20:00.746719] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.442 [2024-04-24 19:20:00.997406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.714 19:20:05 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:40.714 19:20:05 -- common/autotest_common.sh@638 -- # local es=0 00:05:40.714 19:20:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:40.714 19:20:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:40.714 19:20:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:40.714 19:20:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:40.714 19:20:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:40.714 19:20:05 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:40.714 19:20:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:40.714 19:20:05 -- common/autotest_common.sh@10 -- # set +x 00:05:40.714 19:20:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:40.714 19:20:05 -- common/autotest_common.sh@641 -- # es=1 00:05:40.714 19:20:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:40.714 19:20:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:40.714 19:20:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:40.714 19:20:05 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:40.714 19:20:05 -- rpc/skip_rpc.sh@23 -- # killprocess 61968 00:05:40.714 19:20:05 -- common/autotest_common.sh@936 -- # '[' -z 61968 ']' 00:05:40.714 19:20:05 -- common/autotest_common.sh@940 -- # kill -0 61968 00:05:40.714 19:20:05 -- common/autotest_common.sh@941 -- # uname 00:05:40.714 19:20:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.714 19:20:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61968 00:05:40.714 19:20:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 
00:05:40.714 killing process with pid 61968 00:05:40.714 19:20:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.714 19:20:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61968' 00:05:40.714 19:20:05 -- common/autotest_common.sh@955 -- # kill 61968 00:05:40.714 19:20:05 -- common/autotest_common.sh@960 -- # wait 61968 00:05:42.632 00:05:42.632 real 0m7.634s 00:05:42.632 user 0m7.171s 00:05:42.632 sys 0m0.373s 00:05:42.632 19:20:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.632 ************************************ 00:05:42.632 END TEST skip_rpc 00:05:42.632 ************************************ 00:05:42.632 19:20:08 -- common/autotest_common.sh@10 -- # set +x 00:05:42.632 19:20:08 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:42.632 19:20:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.632 19:20:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.632 19:20:08 -- common/autotest_common.sh@10 -- # set +x 00:05:42.632 ************************************ 00:05:42.632 START TEST skip_rpc_with_json 00:05:42.632 ************************************ 00:05:42.632 19:20:08 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:42.632 19:20:08 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:42.632 19:20:08 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=62083 00:05:42.632 19:20:08 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.632 19:20:08 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.632 19:20:08 -- rpc/skip_rpc.sh@31 -- # waitforlisten 62083 00:05:42.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.632 19:20:08 -- common/autotest_common.sh@817 -- # '[' -z 62083 ']' 00:05:42.632 19:20:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.632 19:20:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:42.632 19:20:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.632 19:20:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:42.632 19:20:08 -- common/autotest_common.sh@10 -- # set +x 00:05:42.890 [2024-04-24 19:20:08.340969] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
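waitforlisten, invoked just above for pid 62083, gates the test on the RPC socket actually answering rather than on a fixed sleep. A minimal sketch of the idea, assuming scripts/rpc.py and its -t timeout option:

  pid=62083
  while ! scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
      kill -0 "$pid" || { echo "spdk_tgt $pid died before listening"; exit 1; }
      sleep 0.5
  done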
[2024-04-24 19:20:08.341089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62083 ]
[2024-04-24 19:20:08.510999] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-24 19:20:08.758792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
19:20:09 -- common/autotest_common.sh@846 -- # (( i == 0 ))
19:20:09 -- common/autotest_common.sh@850 -- # return 0
19:20:09 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
19:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable
19:20:09 -- common/autotest_common.sh@10 -- # set +x
[2024-04-24 19:20:09.765240] nvmf_rpc.c:2509:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
request:
{
  "trtype": "tcp",
  "method": "nvmf_get_transports",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -19,
  "message": "No such device"
}
19:20:09 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
19:20:09 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
19:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable
19:20:09 -- common/autotest_common.sh@10 -- # set +x
[2024-04-24 19:20:09.781288] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
19:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
19:20:09 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
19:20:09 -- common/autotest_common.sh@549 -- # xtrace_disable
19:20:09 -- common/autotest_common.sh@10 -- # set +x
19:20:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
19:20:09 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
{
  "subsystems": [
    { "subsystem": "keyring", "config": [] },
    { "subsystem": "iobuf", "config": [
      { "method": "iobuf_set_options", "params": { "small_pool_count": 8192, "large_pool_count": 1024, "small_bufsize": 8192, "large_bufsize": 135168 } }
    ] },
    { "subsystem": "sock", "config": [
      { "method": "sock_impl_set_options", "params": { "impl_name": "posix", "recv_buf_size": 2097152, "send_buf_size": 2097152, "enable_recv_pipe": true, "enable_quickack": false, "enable_placement_id": 0, "enable_zerocopy_send_server": true, "enable_zerocopy_send_client": false, "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false } },
      { "method": "sock_impl_set_options", "params": { "impl_name": "ssl", "recv_buf_size": 4096, "send_buf_size": 4096, "enable_recv_pipe": true, "enable_quickack": false, "enable_placement_id": 0, "enable_zerocopy_send_server": true, "enable_zerocopy_send_client": false, "zerocopy_threshold": 0, "tls_version": 0, "enable_ktls": false } }
    ] },
    { "subsystem": "vmd", "config": [] },
    { "subsystem": "accel", "config": [
      { "method": "accel_set_options", "params": { "small_cache_size": 128, "large_cache_size": 16, "task_count": 2048, "sequence_count": 2048, "buf_count": 2048 } }
    ] },
    { "subsystem": "bdev", "config": [
      { "method": "bdev_set_options", "params": { "bdev_io_pool_size": 65535, "bdev_io_cache_size": 256, "bdev_auto_examine": true, "iobuf_small_cache_size": 128, "iobuf_large_cache_size": 16 } },
      { "method": "bdev_raid_set_options", "params": { "process_window_size_kb": 1024 } },
      { "method": "bdev_iscsi_set_options", "params": { "timeout_sec": 30 } },
      { "method": "bdev_nvme_set_options", "params": { "action_on_timeout": "none", "timeout_us": 0, "timeout_admin_us": 0, "keep_alive_timeout_ms": 10000, "arbitration_burst": 0, "low_priority_weight": 0, "medium_priority_weight": 0, "high_priority_weight": 0, "nvme_adminq_poll_period_us": 10000, "nvme_ioq_poll_period_us": 0, "io_queue_requests": 0, "delay_cmd_submit": true, "transport_retry_count": 4, "bdev_retry_count": 3, "transport_ack_timeout": 0, "ctrlr_loss_timeout_sec": 0, "reconnect_delay_sec": 0, "fast_io_fail_timeout_sec": 0, "disable_auto_failback": false, "generate_uuids": false, "transport_tos": 0, "nvme_error_stat": false, "rdma_srq_size": 0, "io_path_stat": false, "allow_accel_sequence": false, "rdma_max_cq_size": 0, "rdma_cm_event_timeout_ms": 0, "dhchap_digests": [ "sha256", "sha384", "sha512" ], "dhchap_dhgroups": [ "null", "ffdhe2048", "ffdhe3072", "ffdhe4096", "ffdhe6144", "ffdhe8192" ] } },
      { "method": "bdev_nvme_set_hotplug", "params": { "period_us": 100000, "enable": false } },
      { "method": "bdev_wait_for_examine" }
    ] },
    { "subsystem": "scsi", "config": null },
    { "subsystem": "scheduler", "config": [
      { "method": "framework_set_scheduler", "params": { "name": "static" } }
    ] },
    { "subsystem": "vhost_scsi", "config": [] },
    { "subsystem": "vhost_blk", "config": [] },
    { "subsystem": "ublk", "config": [] },
    { "subsystem": "nbd", "config": [] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_set_config", "params": { "discovery_filter": "match_any", "admin_cmd_passthru": { "identify_ctrlr": false } } },
      { "method": "nvmf_set_max_subsystems", "params": { "max_subsystems": 1024 } },
      { "method": "nvmf_set_crdt", "params": { "crdt1": 0, "crdt2": 0, "crdt3": 0 } },
      { "method": "nvmf_create_transport", "params": { "trtype": "TCP", "max_queue_depth": 128, "max_io_qpairs_per_ctrlr": 127, "in_capsule_data_size": 4096, "max_io_size": 131072, "io_unit_size": 131072, "max_aq_depth": 128, "num_shared_buffers": 511, "buf_cache_size": 4294967295, "dif_insert_or_strip": false, "zcopy": false, "c2h_success": true, "sock_priority": 0, "abort_timeout_sec": 1, "ack_timeout": 0 } }
    ] },
    { "subsystem": "iscsi", "config": [
      { "method": "iscsi_set_options", "params": { "node_base": "iqn.2016-06.io.spdk", "max_sessions": 128, "max_connections_per_session": 2, "max_queue_depth": 64, "default_time2wait": 2, "default_time2retain": 20, "first_burst_length": 8192, "immediate_data": true, "allow_duplicated_isid": false, "error_recovery_level": 0, "nop_timeout": 60, "nop_in_interval": 30, "disable_chap": false, "require_chap": false, "mutual_chap": false, "chap_group": 0, "max_large_datain_per_connection": 64, "max_r2t_per_connection": 4, "pdu_pool_size": 36864, "immediate_data_pool_size": 16384, "data_out_pool_size": 2048 } }
    ] }
  ]
}
19:20:09 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
19:20:09 -- rpc/skip_rpc.sh@40 -- # killprocess 62083
19:20:09 -- common/autotest_common.sh@936 -- # '[' -z 62083 ']'
19:20:09 -- common/autotest_common.sh@940 -- # kill -0 62083
19:20:09 -- common/autotest_common.sh@941 -- # uname
19:20:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
19:20:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62083
19:20:09 -- common/autotest_common.sh@942 -- #
process_name=reactor_0 00:05:44.345 19:20:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:44.345 19:20:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62083' 00:05:44.345 killing process with pid 62083 00:05:44.345 19:20:09 -- common/autotest_common.sh@955 -- # kill 62083 00:05:44.345 19:20:09 -- common/autotest_common.sh@960 -- # wait 62083 00:05:47.635 19:20:12 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=62139 00:05:47.635 19:20:12 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:47.635 19:20:12 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:52.975 19:20:17 -- rpc/skip_rpc.sh@50 -- # killprocess 62139 00:05:52.975 19:20:17 -- common/autotest_common.sh@936 -- # '[' -z 62139 ']' 00:05:52.975 19:20:17 -- common/autotest_common.sh@940 -- # kill -0 62139 00:05:52.975 19:20:17 -- common/autotest_common.sh@941 -- # uname 00:05:52.975 19:20:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:52.975 19:20:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62139 00:05:52.975 19:20:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:52.975 19:20:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:52.975 19:20:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62139' 00:05:52.975 killing process with pid 62139 00:05:52.975 19:20:17 -- common/autotest_common.sh@955 -- # kill 62139 00:05:52.975 19:20:17 -- common/autotest_common.sh@960 -- # wait 62139 00:05:54.883 19:20:20 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:54.883 19:20:20 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:54.883 00:05:54.883 real 0m12.106s 00:05:54.883 user 0m11.576s 00:05:54.883 sys 0m0.786s 00:05:54.883 ************************************ 00:05:54.883 END TEST skip_rpc_with_json 00:05:54.883 ************************************ 00:05:54.883 19:20:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:54.883 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:05:54.883 19:20:20 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:54.883 19:20:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:54.883 19:20:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:54.883 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:05:54.883 ************************************ 00:05:54.883 START TEST skip_rpc_with_delay 00:05:54.883 ************************************ 00:05:54.883 19:20:20 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:54.883 19:20:20 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.883 19:20:20 -- common/autotest_common.sh@638 -- # local es=0 00:05:54.883 19:20:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:54.883 19:20:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.883 19:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:54.883 19:20:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.883 19:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:54.883 
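The skip_rpc_with_json run above snapshots the live configuration with save_config and then restarts a second target from that file, checking the replayed log for the 'TCP Transport Init' notice. A minimal sketch of the same round-trip outside the harness (run from the root of an SPDK checkout; /tmp/config.json is an arbitrary scratch path):

    # Bring up a bare target, create the TCP transport over RPC, then
    # snapshot the full configuration as JSON.
    ./build/bin/spdk_tgt -m 0x1 &
    tgt=$!
    sleep 2     # crude stand-in for the harness's waitforlisten
    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > /tmp/config.json
    kill -SIGINT "$tgt"; wait "$tgt"

    # Replay: the target applies the saved JSON at startup, so the
    # 'TCP Transport Init' notice should appear again without any RPC calls.
    ./build/bin/spdk_tgt -m 0x1 --json /tmp/config.json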
19:20:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.883 19:20:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:54.883 19:20:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.883 19:20:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:54.883 19:20:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:55.142 [2024-04-24 19:20:20.597192] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:55.142 [2024-04-24 19:20:20.597308] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:55.142 ************************************ 00:05:55.142 END TEST skip_rpc_with_delay 00:05:55.142 ************************************ 00:05:55.142 19:20:20 -- common/autotest_common.sh@641 -- # es=1 00:05:55.142 19:20:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:55.142 19:20:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:55.142 19:20:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:55.142 00:05:55.142 real 0m0.188s 00:05:55.142 user 0m0.100s 00:05:55.142 sys 0m0.086s 00:05:55.142 19:20:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.142 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:05:55.142 19:20:20 -- rpc/skip_rpc.sh@77 -- # uname 00:05:55.142 19:20:20 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:55.142 19:20:20 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:55.142 19:20:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.142 19:20:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.142 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:05:55.142 ************************************ 00:05:55.142 START TEST exit_on_failed_rpc_init 00:05:55.142 ************************************ 00:05:55.142 19:20:20 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:55.142 19:20:20 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62286 00:05:55.142 19:20:20 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.142 19:20:20 -- rpc/skip_rpc.sh@63 -- # waitforlisten 62286 00:05:55.142 19:20:20 -- common/autotest_common.sh@817 -- # '[' -z 62286 ']' 00:05:55.142 19:20:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.142 19:20:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:55.142 19:20:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.142 19:20:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:55.142 19:20:20 -- common/autotest_common.sh@10 -- # set +x 00:05:55.401 [2024-04-24 19:20:20.901303] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
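The NOT/valid_exec_arg machinery traced above treats a non-zero exit from the wrapped command as the expected result, which is how skip_rpc_with_delay asserts that --wait-for-rpc is rejected when no RPC server will be started. A reduced sketch of that inversion (the real wrapper in common/autotest_common.sh also resolves the executable and special-cases exit codes above 128):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # unexpectedly succeeded
        fi
        return 0        # failed as expected
    }

    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc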
00:05:55.401 [2024-04-24 19:20:20.901491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62286 ] 00:05:55.401 [2024-04-24 19:20:21.063529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.660 [2024-04-24 19:20:21.336920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.047 19:20:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:57.047 19:20:22 -- common/autotest_common.sh@850 -- # return 0 00:05:57.047 19:20:22 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.047 19:20:22 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.047 19:20:22 -- common/autotest_common.sh@638 -- # local es=0 00:05:57.047 19:20:22 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.047 19:20:22 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.047 19:20:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:57.047 19:20:22 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.047 19:20:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:57.047 19:20:22 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.047 19:20:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:57.047 19:20:22 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:57.047 19:20:22 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:57.047 19:20:22 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:57.047 [2024-04-24 19:20:22.533037] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:05:57.047 [2024-04-24 19:20:22.533262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62315 ] 00:05:57.047 [2024-04-24 19:20:22.701093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.303 [2024-04-24 19:20:22.971598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.303 [2024-04-24 19:20:22.971819] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:57.304 [2024-04-24 19:20:22.971879] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:57.304 [2024-04-24 19:20:22.971919] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:57.868 19:20:23 -- common/autotest_common.sh@641 -- # es=234 00:05:57.868 19:20:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:57.868 19:20:23 -- common/autotest_common.sh@650 -- # es=106 00:05:57.868 19:20:23 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:57.868 19:20:23 -- common/autotest_common.sh@658 -- # es=1 00:05:57.868 19:20:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:57.868 19:20:23 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:57.868 19:20:23 -- rpc/skip_rpc.sh@70 -- # killprocess 62286 00:05:57.868 19:20:23 -- common/autotest_common.sh@936 -- # '[' -z 62286 ']' 00:05:57.868 19:20:23 -- common/autotest_common.sh@940 -- # kill -0 62286 00:05:57.868 19:20:23 -- common/autotest_common.sh@941 -- # uname 00:05:57.868 19:20:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.868 19:20:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62286 00:05:57.868 killing process with pid 62286 00:05:57.868 19:20:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.868 19:20:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.868 19:20:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62286' 00:05:57.868 19:20:23 -- common/autotest_common.sh@955 -- # kill 62286 00:05:57.868 19:20:23 -- common/autotest_common.sh@960 -- # wait 62286 00:06:01.152 ************************************ 00:06:01.152 END TEST exit_on_failed_rpc_init 00:06:01.152 ************************************ 00:06:01.152 00:06:01.152 real 0m5.483s 00:06:01.152 user 0m6.198s 00:06:01.152 sys 0m0.556s 00:06:01.152 19:20:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.152 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:01.152 19:20:26 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:01.152 00:06:01.152 real 0m26.056s 00:06:01.152 user 0m25.262s 00:06:01.152 sys 0m2.191s 00:06:01.152 19:20:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.152 ************************************ 00:06:01.152 END TEST skip_rpc 00:06:01.152 ************************************ 00:06:01.152 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:01.152 19:20:26 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:01.152 19:20:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.152 19:20:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.152 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:01.152 ************************************ 00:06:01.152 START TEST rpc_client 00:06:01.152 ************************************ 00:06:01.152 19:20:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:01.152 * Looking for test storage... 
00:06:01.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:01.152 19:20:26 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:01.152 OK 00:06:01.152 19:20:26 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:01.152 00:06:01.152 real 0m0.161s 00:06:01.152 user 0m0.067s 00:06:01.152 sys 0m0.099s 00:06:01.152 19:20:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.152 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:01.152 ************************************ 00:06:01.152 END TEST rpc_client 00:06:01.152 ************************************ 00:06:01.152 19:20:26 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:01.152 19:20:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.152 19:20:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.152 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:01.152 ************************************ 00:06:01.152 START TEST json_config 00:06:01.152 ************************************ 00:06:01.152 19:20:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:01.413 19:20:26 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:01.413 19:20:26 -- nvmf/common.sh@7 -- # uname -s 00:06:01.413 19:20:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.413 19:20:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.413 19:20:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.413 19:20:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.413 19:20:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.413 19:20:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.413 19:20:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.413 19:20:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.413 19:20:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.413 19:20:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.413 19:20:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:84fe9573-d922-4396-b597-209883f76b96 00:06:01.413 19:20:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=84fe9573-d922-4396-b597-209883f76b96 00:06:01.413 19:20:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.413 19:20:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.413 19:20:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:01.413 19:20:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.413 19:20:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:01.413 19:20:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.413 19:20:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.413 19:20:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.413 19:20:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.413 19:20:26 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.413 19:20:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.413 19:20:26 -- paths/export.sh@5 -- # export PATH 00:06:01.413 19:20:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.413 19:20:26 -- nvmf/common.sh@47 -- # : 0 00:06:01.413 19:20:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.413 19:20:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.413 19:20:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.413 19:20:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.413 19:20:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.413 19:20:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.413 19:20:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.413 19:20:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.413 19:20:26 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:01.413 19:20:26 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:01.413 19:20:26 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:01.413 WARNING: No tests are enabled so not running JSON configuration tests 00:06:01.413 19:20:26 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:01.413 19:20:26 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:01.413 19:20:26 -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:01.413 19:20:26 -- json_config/json_config.sh@28 -- # exit 0 00:06:01.413 00:06:01.413 real 0m0.123s 00:06:01.413 user 0m0.054s 00:06:01.413 sys 0m0.067s 00:06:01.413 ************************************ 00:06:01.413 END TEST json_config 00:06:01.413 ************************************ 00:06:01.413 19:20:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.413 19:20:26 -- common/autotest_common.sh@10 -- # set +x 00:06:01.413 19:20:26 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:01.413 19:20:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.413 19:20:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.413 19:20:26 -- common/autotest_common.sh@10 -- # 
set +x 00:06:01.413 ************************************ 00:06:01.413 START TEST json_config_extra_key 00:06:01.413 ************************************ 00:06:01.413 19:20:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:01.413 19:20:27 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:01.413 19:20:27 -- nvmf/common.sh@7 -- # uname -s 00:06:01.413 19:20:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.413 19:20:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.413 19:20:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.413 19:20:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.413 19:20:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.413 19:20:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.413 19:20:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.413 19:20:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.413 19:20:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.413 19:20:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.413 19:20:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:84fe9573-d922-4396-b597-209883f76b96 00:06:01.413 19:20:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=84fe9573-d922-4396-b597-209883f76b96 00:06:01.413 19:20:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.413 19:20:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.413 19:20:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:01.413 19:20:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.413 19:20:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:01.413 19:20:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.413 19:20:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.413 19:20:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.413 19:20:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.413 19:20:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.413 19:20:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.413 19:20:27 -- paths/export.sh@5 -- # export PATH 00:06:01.413 19:20:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.413 19:20:27 -- nvmf/common.sh@47 -- # : 0 00:06:01.413 19:20:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:01.413 19:20:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:01.413 19:20:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.413 19:20:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.413 19:20:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.413 19:20:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:01.413 19:20:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:01.413 19:20:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:01.413 19:20:27 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:01.414 19:20:27 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:01.414 19:20:27 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:01.414 19:20:27 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:01.414 19:20:27 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:01.414 19:20:27 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:01.414 19:20:27 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:01.414 INFO: launching applications... 00:06:01.414 Waiting for target to run... 00:06:01.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:01.414 19:20:27 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:01.414 19:20:27 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:01.414 19:20:27 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:01.414 19:20:27 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:01.414 19:20:27 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:01.414 19:20:27 -- json_config/common.sh@9 -- # local app=target 00:06:01.414 19:20:27 -- json_config/common.sh@10 -- # shift 00:06:01.414 19:20:27 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.414 19:20:27 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.414 19:20:27 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.414 19:20:27 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.414 19:20:27 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.414 19:20:27 -- json_config/common.sh@22 -- # app_pid["$app"]=62516 00:06:01.414 19:20:27 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:01.414 19:20:27 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
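json_config_extra_key starts its target with -s 1024 (a 1024 MiB memory cap) and -r /var/tmp/spdk_tgt.sock, so the RPC listener is moved off the default /var/tmp/spdk.sock and every client call has to name the socket explicitly. A short sketch of that pairing (framework_get_config stands in for any RPC; it is one of the methods enumerated later in this log):

    # Target on a non-default RPC socket, config applied from JSON at startup.
    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./test/json_config/extra_key.json &

    # Clients must point at the same socket with -s.
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_get_config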
00:06:01.414 19:20:27 -- json_config/common.sh@25 -- # waitforlisten 62516 /var/tmp/spdk_tgt.sock 00:06:01.414 19:20:27 -- common/autotest_common.sh@817 -- # '[' -z 62516 ']' 00:06:01.414 19:20:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.414 19:20:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:01.414 19:20:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.414 19:20:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:01.414 19:20:27 -- common/autotest_common.sh@10 -- # set +x 00:06:01.673 [2024-04-24 19:20:27.184424] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:06:01.673 [2024-04-24 19:20:27.184671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62516 ] 00:06:01.932 [2024-04-24 19:20:27.572274] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.192 [2024-04-24 19:20:27.821983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.128 19:20:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:03.128 19:20:28 -- common/autotest_common.sh@850 -- # return 0 00:06:03.128 19:20:28 -- json_config/common.sh@26 -- # echo '' 00:06:03.128 00:06:03.128 19:20:28 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:03.128 INFO: shutting down applications... 00:06:03.128 19:20:28 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:03.128 19:20:28 -- json_config/common.sh@31 -- # local app=target 00:06:03.128 19:20:28 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:03.128 19:20:28 -- json_config/common.sh@35 -- # [[ -n 62516 ]] 00:06:03.128 19:20:28 -- json_config/common.sh@38 -- # kill -SIGINT 62516 00:06:03.128 19:20:28 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:03.128 19:20:28 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.128 19:20:28 -- json_config/common.sh@41 -- # kill -0 62516 00:06:03.128 19:20:28 -- json_config/common.sh@45 -- # sleep 0.5 00:06:03.695 19:20:29 -- json_config/common.sh@40 -- # (( i++ )) 00:06:03.695 19:20:29 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.695 19:20:29 -- json_config/common.sh@41 -- # kill -0 62516 00:06:03.695 19:20:29 -- json_config/common.sh@45 -- # sleep 0.5 00:06:04.262 19:20:29 -- json_config/common.sh@40 -- # (( i++ )) 00:06:04.262 19:20:29 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.262 19:20:29 -- json_config/common.sh@41 -- # kill -0 62516 00:06:04.262 19:20:29 -- json_config/common.sh@45 -- # sleep 0.5 00:06:04.829 19:20:30 -- json_config/common.sh@40 -- # (( i++ )) 00:06:04.829 19:20:30 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:04.829 19:20:30 -- json_config/common.sh@41 -- # kill -0 62516 00:06:04.829 19:20:30 -- json_config/common.sh@45 -- # sleep 0.5 00:06:05.397 19:20:30 -- json_config/common.sh@40 -- # (( i++ )) 00:06:05.397 19:20:30 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.397 19:20:30 -- json_config/common.sh@41 -- # kill -0 62516 00:06:05.397 19:20:30 -- json_config/common.sh@45 -- # sleep 0.5 00:06:05.656 19:20:31 -- json_config/common.sh@40 -- # (( i++ )) 00:06:05.656 19:20:31 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:05.656 19:20:31 -- 
json_config/common.sh@41 -- # kill -0 62516 00:06:05.656 19:20:31 -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.223 19:20:31 -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.223 19:20:31 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.223 19:20:31 -- json_config/common.sh@41 -- # kill -0 62516 00:06:06.223 19:20:31 -- json_config/common.sh@45 -- # sleep 0.5 00:06:06.790 19:20:32 -- json_config/common.sh@40 -- # (( i++ )) 00:06:06.790 19:20:32 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:06.790 19:20:32 -- json_config/common.sh@41 -- # kill -0 62516 00:06:06.790 19:20:32 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:06.790 19:20:32 -- json_config/common.sh@43 -- # break 00:06:06.790 19:20:32 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:06.790 19:20:32 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:06.790 SPDK target shutdown done 00:06:06.790 19:20:32 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:06.790 Success 00:06:06.790 00:06:06.790 real 0m5.314s 00:06:06.790 user 0m4.821s 00:06:06.790 sys 0m0.510s 00:06:06.790 19:20:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.790 19:20:32 -- common/autotest_common.sh@10 -- # set +x 00:06:06.790 ************************************ 00:06:06.790 END TEST json_config_extra_key 00:06:06.790 ************************************ 00:06:06.790 19:20:32 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:06.790 19:20:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.790 19:20:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.790 19:20:32 -- common/autotest_common.sh@10 -- # set +x 00:06:06.790 ************************************ 00:06:06.790 START TEST alias_rpc 00:06:06.790 ************************************ 00:06:06.790 19:20:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:07.049 * Looking for test storage... 00:06:07.049 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:07.049 19:20:32 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.049 19:20:32 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:07.049 19:20:32 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62637 00:06:07.049 19:20:32 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62637 00:06:07.049 19:20:32 -- common/autotest_common.sh@817 -- # '[' -z 62637 ']' 00:06:07.049 19:20:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.049 19:20:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:07.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.049 19:20:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.049 19:20:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:07.049 19:20:32 -- common/autotest_common.sh@10 -- # set +x 00:06:07.049 [2024-04-24 19:20:32.628312] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
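The shutdown sequence traced above is json_config_test_shutdown_app: it sends SIGINT, then polls the pid for up to 30 half-second intervals before declaring success. The core of that loop, reduced to a sketch ($pid stands in for the app_pid entry the harness tracks):

    # Ask the target to shut down, then wait for the pid to disappear.
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 only probes existence
        sleep 0.5
    done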
00:06:07.049 [2024-04-24 19:20:32.628444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62637 ] 00:06:07.309 [2024-04-24 19:20:32.799625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.572 [2024-04-24 19:20:33.067645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.507 19:20:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:08.507 19:20:34 -- common/autotest_common.sh@850 -- # return 0 00:06:08.507 19:20:34 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:08.767 19:20:34 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62637 00:06:08.767 19:20:34 -- common/autotest_common.sh@936 -- # '[' -z 62637 ']' 00:06:08.767 19:20:34 -- common/autotest_common.sh@940 -- # kill -0 62637 00:06:08.767 19:20:34 -- common/autotest_common.sh@941 -- # uname 00:06:08.767 19:20:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.767 19:20:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62637 00:06:08.767 killing process with pid 62637 00:06:08.767 19:20:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.767 19:20:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.767 19:20:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62637' 00:06:08.767 19:20:34 -- common/autotest_common.sh@955 -- # kill 62637 00:06:08.767 19:20:34 -- common/autotest_common.sh@960 -- # wait 62637 00:06:12.076 ************************************ 00:06:12.076 END TEST alias_rpc 00:06:12.076 ************************************ 00:06:12.076 00:06:12.076 real 0m4.743s 00:06:12.076 user 0m4.777s 00:06:12.076 sys 0m0.526s 00:06:12.076 19:20:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.076 19:20:37 -- common/autotest_common.sh@10 -- # set +x 00:06:12.076 19:20:37 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:12.076 19:20:37 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:12.076 19:20:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.076 19:20:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.076 19:20:37 -- common/autotest_common.sh@10 -- # set +x 00:06:12.076 ************************************ 00:06:12.076 START TEST spdkcli_tcp 00:06:12.076 ************************************ 00:06:12.076 19:20:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:12.076 * Looking for test storage... 
00:06:12.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:12.076 19:20:37 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:12.076 19:20:37 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:12.076 19:20:37 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:12.076 19:20:37 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:12.076 19:20:37 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:12.076 19:20:37 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:12.076 19:20:37 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:12.076 19:20:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:12.076 19:20:37 -- common/autotest_common.sh@10 -- # set +x 00:06:12.076 19:20:37 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62746 00:06:12.076 19:20:37 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:12.076 19:20:37 -- spdkcli/tcp.sh@27 -- # waitforlisten 62746 00:06:12.076 19:20:37 -- common/autotest_common.sh@817 -- # '[' -z 62746 ']' 00:06:12.076 19:20:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.076 19:20:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:12.076 19:20:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.076 19:20:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:12.076 19:20:37 -- common/autotest_common.sh@10 -- # set +x 00:06:12.076 [2024-04-24 19:20:37.527721] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
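spdkcli_tcp exercises JSON-RPC over TCP rather than the UNIX socket: socat bridges 127.0.0.1:9998 to /var/tmp/spdk.sock and rpc.py is pointed at the TCP side, exactly as the trace below shows. The same bridge, stated as two standalone commands:

    # Forward TCP port 9998 to the target's default RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

    # -r retries, -t per-call timeout, -s/-p select host and port.
    ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods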
00:06:12.076 [2024-04-24 19:20:37.527921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62746 ] 00:06:12.076 [2024-04-24 19:20:37.697179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.334 [2024-04-24 19:20:37.964818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.334 [2024-04-24 19:20:37.964855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.739 19:20:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.739 19:20:39 -- common/autotest_common.sh@850 -- # return 0 00:06:13.739 19:20:39 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:13.739 19:20:39 -- spdkcli/tcp.sh@31 -- # socat_pid=62769 00:06:13.739 19:20:39 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:13.739 [ 00:06:13.739 "bdev_malloc_delete", 00:06:13.739 "bdev_malloc_create", 00:06:13.739 "bdev_null_resize", 00:06:13.739 "bdev_null_delete", 00:06:13.739 "bdev_null_create", 00:06:13.739 "bdev_nvme_cuse_unregister", 00:06:13.739 "bdev_nvme_cuse_register", 00:06:13.739 "bdev_opal_new_user", 00:06:13.739 "bdev_opal_set_lock_state", 00:06:13.739 "bdev_opal_delete", 00:06:13.739 "bdev_opal_get_info", 00:06:13.739 "bdev_opal_create", 00:06:13.739 "bdev_nvme_opal_revert", 00:06:13.739 "bdev_nvme_opal_init", 00:06:13.739 "bdev_nvme_send_cmd", 00:06:13.739 "bdev_nvme_get_path_iostat", 00:06:13.739 "bdev_nvme_get_mdns_discovery_info", 00:06:13.739 "bdev_nvme_stop_mdns_discovery", 00:06:13.739 "bdev_nvme_start_mdns_discovery", 00:06:13.739 "bdev_nvme_set_multipath_policy", 00:06:13.739 "bdev_nvme_set_preferred_path", 00:06:13.739 "bdev_nvme_get_io_paths", 00:06:13.739 "bdev_nvme_remove_error_injection", 00:06:13.739 "bdev_nvme_add_error_injection", 00:06:13.739 "bdev_nvme_get_discovery_info", 00:06:13.739 "bdev_nvme_stop_discovery", 00:06:13.739 "bdev_nvme_start_discovery", 00:06:13.739 "bdev_nvme_get_controller_health_info", 00:06:13.739 "bdev_nvme_disable_controller", 00:06:13.739 "bdev_nvme_enable_controller", 00:06:13.739 "bdev_nvme_reset_controller", 00:06:13.739 "bdev_nvme_get_transport_statistics", 00:06:13.739 "bdev_nvme_apply_firmware", 00:06:13.739 "bdev_nvme_detach_controller", 00:06:13.739 "bdev_nvme_get_controllers", 00:06:13.739 "bdev_nvme_attach_controller", 00:06:13.739 "bdev_nvme_set_hotplug", 00:06:13.739 "bdev_nvme_set_options", 00:06:13.739 "bdev_passthru_delete", 00:06:13.739 "bdev_passthru_create", 00:06:13.739 "bdev_lvol_grow_lvstore", 00:06:13.739 "bdev_lvol_get_lvols", 00:06:13.739 "bdev_lvol_get_lvstores", 00:06:13.739 "bdev_lvol_delete", 00:06:13.739 "bdev_lvol_set_read_only", 00:06:13.739 "bdev_lvol_resize", 00:06:13.739 "bdev_lvol_decouple_parent", 00:06:13.739 "bdev_lvol_inflate", 00:06:13.739 "bdev_lvol_rename", 00:06:13.739 "bdev_lvol_clone_bdev", 00:06:13.739 "bdev_lvol_clone", 00:06:13.739 "bdev_lvol_snapshot", 00:06:13.739 "bdev_lvol_create", 00:06:13.739 "bdev_lvol_delete_lvstore", 00:06:13.739 "bdev_lvol_rename_lvstore", 00:06:13.739 "bdev_lvol_create_lvstore", 00:06:13.739 "bdev_raid_set_options", 00:06:13.739 "bdev_raid_remove_base_bdev", 00:06:13.739 "bdev_raid_add_base_bdev", 00:06:13.739 "bdev_raid_delete", 00:06:13.739 "bdev_raid_create", 00:06:13.739 "bdev_raid_get_bdevs", 00:06:13.739 "bdev_error_inject_error", 
00:06:13.739 "bdev_error_delete", 00:06:13.739 "bdev_error_create", 00:06:13.739 "bdev_split_delete", 00:06:13.739 "bdev_split_create", 00:06:13.739 "bdev_delay_delete", 00:06:13.739 "bdev_delay_create", 00:06:13.739 "bdev_delay_update_latency", 00:06:13.739 "bdev_zone_block_delete", 00:06:13.739 "bdev_zone_block_create", 00:06:13.739 "blobfs_create", 00:06:13.739 "blobfs_detect", 00:06:13.739 "blobfs_set_cache_size", 00:06:13.739 "bdev_xnvme_delete", 00:06:13.739 "bdev_xnvme_create", 00:06:13.739 "bdev_aio_delete", 00:06:13.739 "bdev_aio_rescan", 00:06:13.739 "bdev_aio_create", 00:06:13.739 "bdev_ftl_set_property", 00:06:13.739 "bdev_ftl_get_properties", 00:06:13.739 "bdev_ftl_get_stats", 00:06:13.739 "bdev_ftl_unmap", 00:06:13.739 "bdev_ftl_unload", 00:06:13.739 "bdev_ftl_delete", 00:06:13.739 "bdev_ftl_load", 00:06:13.739 "bdev_ftl_create", 00:06:13.739 "bdev_virtio_attach_controller", 00:06:13.739 "bdev_virtio_scsi_get_devices", 00:06:13.739 "bdev_virtio_detach_controller", 00:06:13.739 "bdev_virtio_blk_set_hotplug", 00:06:13.739 "bdev_iscsi_delete", 00:06:13.739 "bdev_iscsi_create", 00:06:13.739 "bdev_iscsi_set_options", 00:06:13.739 "accel_error_inject_error", 00:06:13.739 "ioat_scan_accel_module", 00:06:13.739 "dsa_scan_accel_module", 00:06:13.739 "iaa_scan_accel_module", 00:06:13.739 "keyring_file_remove_key", 00:06:13.739 "keyring_file_add_key", 00:06:13.739 "iscsi_set_options", 00:06:13.739 "iscsi_get_auth_groups", 00:06:13.739 "iscsi_auth_group_remove_secret", 00:06:13.739 "iscsi_auth_group_add_secret", 00:06:13.739 "iscsi_delete_auth_group", 00:06:13.739 "iscsi_create_auth_group", 00:06:13.739 "iscsi_set_discovery_auth", 00:06:13.739 "iscsi_get_options", 00:06:13.739 "iscsi_target_node_request_logout", 00:06:13.739 "iscsi_target_node_set_redirect", 00:06:13.739 "iscsi_target_node_set_auth", 00:06:13.739 "iscsi_target_node_add_lun", 00:06:13.739 "iscsi_get_stats", 00:06:13.739 "iscsi_get_connections", 00:06:13.739 "iscsi_portal_group_set_auth", 00:06:13.739 "iscsi_start_portal_group", 00:06:13.739 "iscsi_delete_portal_group", 00:06:13.739 "iscsi_create_portal_group", 00:06:13.739 "iscsi_get_portal_groups", 00:06:13.739 "iscsi_delete_target_node", 00:06:13.739 "iscsi_target_node_remove_pg_ig_maps", 00:06:13.739 "iscsi_target_node_add_pg_ig_maps", 00:06:13.739 "iscsi_create_target_node", 00:06:13.739 "iscsi_get_target_nodes", 00:06:13.739 "iscsi_delete_initiator_group", 00:06:13.739 "iscsi_initiator_group_remove_initiators", 00:06:13.739 "iscsi_initiator_group_add_initiators", 00:06:13.739 "iscsi_create_initiator_group", 00:06:13.739 "iscsi_get_initiator_groups", 00:06:13.739 "nvmf_set_crdt", 00:06:13.739 "nvmf_set_config", 00:06:13.739 "nvmf_set_max_subsystems", 00:06:13.739 "nvmf_subsystem_get_listeners", 00:06:13.739 "nvmf_subsystem_get_qpairs", 00:06:13.739 "nvmf_subsystem_get_controllers", 00:06:13.739 "nvmf_get_stats", 00:06:13.739 "nvmf_get_transports", 00:06:13.739 "nvmf_create_transport", 00:06:13.739 "nvmf_get_targets", 00:06:13.739 "nvmf_delete_target", 00:06:13.739 "nvmf_create_target", 00:06:13.739 "nvmf_subsystem_allow_any_host", 00:06:13.739 "nvmf_subsystem_remove_host", 00:06:13.739 "nvmf_subsystem_add_host", 00:06:13.739 "nvmf_ns_remove_host", 00:06:13.739 "nvmf_ns_add_host", 00:06:13.739 "nvmf_subsystem_remove_ns", 00:06:13.739 "nvmf_subsystem_add_ns", 00:06:13.739 "nvmf_subsystem_listener_set_ana_state", 00:06:13.739 "nvmf_discovery_get_referrals", 00:06:13.739 "nvmf_discovery_remove_referral", 00:06:13.739 "nvmf_discovery_add_referral", 00:06:13.739 
"nvmf_subsystem_remove_listener", 00:06:13.739 "nvmf_subsystem_add_listener", 00:06:13.739 "nvmf_delete_subsystem", 00:06:13.739 "nvmf_create_subsystem", 00:06:13.739 "nvmf_get_subsystems", 00:06:13.739 "env_dpdk_get_mem_stats", 00:06:13.739 "nbd_get_disks", 00:06:13.739 "nbd_stop_disk", 00:06:13.739 "nbd_start_disk", 00:06:13.739 "ublk_recover_disk", 00:06:13.739 "ublk_get_disks", 00:06:13.739 "ublk_stop_disk", 00:06:13.739 "ublk_start_disk", 00:06:13.739 "ublk_destroy_target", 00:06:13.739 "ublk_create_target", 00:06:13.739 "virtio_blk_create_transport", 00:06:13.739 "virtio_blk_get_transports", 00:06:13.739 "vhost_controller_set_coalescing", 00:06:13.739 "vhost_get_controllers", 00:06:13.739 "vhost_delete_controller", 00:06:13.739 "vhost_create_blk_controller", 00:06:13.739 "vhost_scsi_controller_remove_target", 00:06:13.739 "vhost_scsi_controller_add_target", 00:06:13.739 "vhost_start_scsi_controller", 00:06:13.739 "vhost_create_scsi_controller", 00:06:13.739 "thread_set_cpumask", 00:06:13.739 "framework_get_scheduler", 00:06:13.739 "framework_set_scheduler", 00:06:13.739 "framework_get_reactors", 00:06:13.739 "thread_get_io_channels", 00:06:13.739 "thread_get_pollers", 00:06:13.739 "thread_get_stats", 00:06:13.740 "framework_monitor_context_switch", 00:06:13.740 "spdk_kill_instance", 00:06:13.740 "log_enable_timestamps", 00:06:13.740 "log_get_flags", 00:06:13.740 "log_clear_flag", 00:06:13.740 "log_set_flag", 00:06:13.740 "log_get_level", 00:06:13.740 "log_set_level", 00:06:13.740 "log_get_print_level", 00:06:13.740 "log_set_print_level", 00:06:13.740 "framework_enable_cpumask_locks", 00:06:13.740 "framework_disable_cpumask_locks", 00:06:13.740 "framework_wait_init", 00:06:13.740 "framework_start_init", 00:06:13.740 "scsi_get_devices", 00:06:13.740 "bdev_get_histogram", 00:06:13.740 "bdev_enable_histogram", 00:06:13.740 "bdev_set_qos_limit", 00:06:13.740 "bdev_set_qd_sampling_period", 00:06:13.740 "bdev_get_bdevs", 00:06:13.740 "bdev_reset_iostat", 00:06:13.740 "bdev_get_iostat", 00:06:13.740 "bdev_examine", 00:06:13.740 "bdev_wait_for_examine", 00:06:13.740 "bdev_set_options", 00:06:13.740 "notify_get_notifications", 00:06:13.740 "notify_get_types", 00:06:13.740 "accel_get_stats", 00:06:13.740 "accel_set_options", 00:06:13.740 "accel_set_driver", 00:06:13.740 "accel_crypto_key_destroy", 00:06:13.740 "accel_crypto_keys_get", 00:06:13.740 "accel_crypto_key_create", 00:06:13.740 "accel_assign_opc", 00:06:13.740 "accel_get_module_info", 00:06:13.740 "accel_get_opc_assignments", 00:06:13.740 "vmd_rescan", 00:06:13.740 "vmd_remove_device", 00:06:13.740 "vmd_enable", 00:06:13.740 "sock_set_default_impl", 00:06:13.740 "sock_impl_set_options", 00:06:13.740 "sock_impl_get_options", 00:06:13.740 "iobuf_get_stats", 00:06:13.740 "iobuf_set_options", 00:06:13.740 "framework_get_pci_devices", 00:06:13.740 "framework_get_config", 00:06:13.740 "framework_get_subsystems", 00:06:13.740 "trace_get_info", 00:06:13.740 "trace_get_tpoint_group_mask", 00:06:13.740 "trace_disable_tpoint_group", 00:06:13.740 "trace_enable_tpoint_group", 00:06:13.740 "trace_clear_tpoint_mask", 00:06:13.740 "trace_set_tpoint_mask", 00:06:13.740 "keyring_get_keys", 00:06:13.740 "spdk_get_version", 00:06:13.740 "rpc_get_methods" 00:06:13.740 ] 00:06:13.740 19:20:39 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:13.740 19:20:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:13.740 19:20:39 -- common/autotest_common.sh@10 -- # set +x 00:06:13.740 19:20:39 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM 
EXIT 00:06:13.740 19:20:39 -- spdkcli/tcp.sh@38 -- # killprocess 62746 00:06:13.740 19:20:39 -- common/autotest_common.sh@936 -- # '[' -z 62746 ']' 00:06:13.740 19:20:39 -- common/autotest_common.sh@940 -- # kill -0 62746 00:06:13.740 19:20:39 -- common/autotest_common.sh@941 -- # uname 00:06:13.740 19:20:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:13.740 19:20:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62746 00:06:13.740 19:20:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:13.740 19:20:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:13.740 19:20:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62746' 00:06:13.740 killing process with pid 62746 00:06:13.740 19:20:39 -- common/autotest_common.sh@955 -- # kill 62746 00:06:13.740 19:20:39 -- common/autotest_common.sh@960 -- # wait 62746 00:06:17.032 00:06:17.032 real 0m4.876s 00:06:17.032 user 0m8.625s 00:06:17.032 sys 0m0.574s 00:06:17.032 19:20:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.032 19:20:42 -- common/autotest_common.sh@10 -- # set +x 00:06:17.032 ************************************ 00:06:17.032 END TEST spdkcli_tcp 00:06:17.032 ************************************ 00:06:17.032 19:20:42 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:17.032 19:20:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:17.032 19:20:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.032 19:20:42 -- common/autotest_common.sh@10 -- # set +x 00:06:17.032 ************************************ 00:06:17.032 START TEST dpdk_mem_utility 00:06:17.032 ************************************ 00:06:17.032 19:20:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:17.032 * Looking for test storage... 00:06:17.032 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:17.032 19:20:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:17.032 19:20:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:17.032 19:20:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62876 00:06:17.032 19:20:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62876 00:06:17.032 19:20:42 -- common/autotest_common.sh@817 -- # '[' -z 62876 ']' 00:06:17.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.032 19:20:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.032 19:20:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:17.032 19:20:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.032 19:20:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:17.032 19:20:42 -- common/autotest_common.sh@10 -- # set +x 00:06:17.032 [2024-04-24 19:20:42.517423] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
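This dpdk_mem_utility run pairs two tools, as the trace that follows shows: the env_dpdk_get_mem_stats RPC, which makes the target write a dump file (/tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py, which summarizes it, first as heap/mempool/memzone totals and then, with -m 0, as a per-element view of heap 0. Against a running target the flow is simply:

    # Dump the target's DPDK memory state to /tmp/spdk_mem_dump.txt.
    ./scripts/rpc.py env_dpdk_get_mem_stats

    # Summarize the dump; -m 0 expands heap 0 element by element.
    ./scripts/dpdk_mem_info.py
    ./scripts/dpdk_mem_info.py -m 0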
00:06:17.032 [2024-04-24 19:20:42.517547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62876 ] 00:06:17.032 [2024-04-24 19:20:42.686300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.290 [2024-04-24 19:20:42.954121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.669 19:20:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:18.669 19:20:44 -- common/autotest_common.sh@850 -- # return 0 00:06:18.669 19:20:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:18.669 19:20:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:18.669 19:20:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.669 19:20:44 -- common/autotest_common.sh@10 -- # set +x 00:06:18.669 { 00:06:18.669 "filename": "/tmp/spdk_mem_dump.txt" 00:06:18.669 } 00:06:18.669 19:20:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.669 19:20:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:18.669 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:18.669 1 heaps totaling size 820.000000 MiB 00:06:18.669 size: 820.000000 MiB heap id: 0 00:06:18.669 end heaps---------- 00:06:18.669 8 mempools totaling size 598.116089 MiB 00:06:18.669 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:18.669 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:18.669 size: 84.521057 MiB name: bdev_io_62876 00:06:18.669 size: 51.011292 MiB name: evtpool_62876 00:06:18.669 size: 50.003479 MiB name: msgpool_62876 00:06:18.669 size: 21.763794 MiB name: PDU_Pool 00:06:18.669 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:18.669 size: 0.026123 MiB name: Session_Pool 00:06:18.669 end mempools------- 00:06:18.669 6 memzones totaling size 4.142822 MiB 00:06:18.669 size: 1.000366 MiB name: RG_ring_0_62876 00:06:18.669 size: 1.000366 MiB name: RG_ring_1_62876 00:06:18.669 size: 1.000366 MiB name: RG_ring_4_62876 00:06:18.669 size: 1.000366 MiB name: RG_ring_5_62876 00:06:18.669 size: 0.125366 MiB name: RG_ring_2_62876 00:06:18.669 size: 0.015991 MiB name: RG_ring_3_62876 00:06:18.669 end memzones------- 00:06:18.669 19:20:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:18.669 heap id: 0 total size: 820.000000 MiB number of busy elements: 296 number of free elements: 18 00:06:18.669 list of free elements. 
size: 18.452515 MiB 00:06:18.669 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:18.669 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:18.669 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:18.669 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:18.669 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:18.669 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:18.669 element at address: 0x200019600000 with size: 0.999084 MiB 00:06:18.669 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:18.669 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:18.669 element at address: 0x200018e00000 with size: 0.959656 MiB 00:06:18.669 element at address: 0x200019900040 with size: 0.936401 MiB 00:06:18.669 element at address: 0x200000200000 with size: 0.830200 MiB 00:06:18.669 element at address: 0x20001b000000 with size: 0.565125 MiB 00:06:18.669 element at address: 0x200019200000 with size: 0.487976 MiB 00:06:18.669 element at address: 0x200019a00000 with size: 0.485413 MiB 00:06:18.669 element at address: 0x200013800000 with size: 0.467651 MiB 00:06:18.669 element at address: 0x200028400000 with size: 0.390442 MiB 00:06:18.669 element at address: 0x200003a00000 with size: 0.351990 MiB 00:06:18.669 list of standard malloc elements. size: 199.283081 MiB 00:06:18.669 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:18.669 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:18.669 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:18.669 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:18.669 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:18.669 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:18.669 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:18.669 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:18.669 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:06:18.669 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:06:18.669 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:06:18.669 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:06:18.669 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:06:18.669 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:06:18.669 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:06:18.669 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:06:18.669 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:06:18.669 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:06:18.669 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:06:18.669 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5a80 with size: 0.000244 MiB 
00:06:18.670 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:06:18.670 element at 
address: 0x200003aff980 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200013877b80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200013877c80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200013877d80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200013877e80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200013877f80 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200013878080 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200013878180 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200013878280 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200013878380 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200013878480 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200013878580 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001927d4c0 
with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x200019abc680 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:06:18.670 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0930c0 with size: 0.000244 MiB 
00:06:18.671 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:06:18.671 element at address: 0x200028463f40 with size: 0.000244 MiB 00:06:18.671 element at address: 0x200028464040 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846af80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846b080 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846b180 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846b280 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846b380 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846b480 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846b580 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846b680 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846b780 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846b880 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846b980 with size: 0.000244 MiB 00:06:18.671 element at 
address: 0x20002846ba80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846be80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846c080 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846c180 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846c280 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846c380 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846c480 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846c580 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846c680 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846c780 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846c880 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846c980 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846d080 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846d180 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846d280 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846d380 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846d480 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846d580 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846d680 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846d780 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846d880 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846d980 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846da80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846db80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846de80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846df80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846e080 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846e180 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846e280 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846e380 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846e480 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846e580 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846e680 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846e780 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846e880 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846e980 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846eb80 
with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846f080 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846f180 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846f280 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846f380 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846f480 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846f580 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846f680 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846f780 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846f880 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846f980 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:06:18.671 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:06:18.671 list of memzone associated elements. size: 602.264404 MiB 00:06:18.671 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:18.671 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:18.671 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:18.671 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:18.671 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:18.671 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62876_0 00:06:18.671 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:18.671 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62876_0 00:06:18.671 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:18.671 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62876_0 00:06:18.671 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:18.671 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:18.671 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:18.671 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:18.671 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:18.671 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62876 00:06:18.671 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:18.671 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62876 00:06:18.671 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:18.671 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62876 00:06:18.671 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:18.671 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:18.671 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:18.671 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:18.671 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:18.671 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:18.672 element at address: 0x2000138f89c0 with size: 
1.008179 MiB 00:06:18.672 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:18.672 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:18.672 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62876 00:06:18.672 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:18.672 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62876 00:06:18.672 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:18.672 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62876 00:06:18.672 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:18.672 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62876 00:06:18.672 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:18.672 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62876 00:06:18.672 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:06:18.672 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:18.672 element at address: 0x200013878680 with size: 0.500549 MiB 00:06:18.672 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:18.672 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:06:18.672 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:18.672 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:18.672 associated memzone info: size: 0.125366 MiB name: RG_ring_2_62876 00:06:18.672 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:06:18.672 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:18.672 element at address: 0x200028464140 with size: 0.023804 MiB 00:06:18.672 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:18.672 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:18.672 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62876 00:06:18.672 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:06:18.672 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:18.672 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:06:18.672 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62876 00:06:18.672 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:18.672 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62876 00:06:18.672 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:06:18.672 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:18.672 19:20:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:18.672 19:20:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62876 00:06:18.672 19:20:44 -- common/autotest_common.sh@936 -- # '[' -z 62876 ']' 00:06:18.672 19:20:44 -- common/autotest_common.sh@940 -- # kill -0 62876 00:06:18.672 19:20:44 -- common/autotest_common.sh@941 -- # uname 00:06:18.672 19:20:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.672 19:20:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62876 00:06:18.672 19:20:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.672 19:20:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.672 19:20:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62876' 00:06:18.672 killing process with pid 62876 00:06:18.672 19:20:44 -- common/autotest_common.sh@955 -- # kill 62876 00:06:18.672 19:20:44 -- 
common/autotest_common.sh@960 -- # wait 62876 00:06:21.960 00:06:21.960 real 0m4.633s 00:06:21.960 user 0m4.574s 00:06:21.960 sys 0m0.530s 00:06:21.960 ************************************ 00:06:21.960 END TEST dpdk_mem_utility 00:06:21.960 ************************************ 00:06:21.960 19:20:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.960 19:20:46 -- common/autotest_common.sh@10 -- # set +x 00:06:21.960 19:20:46 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:21.960 19:20:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:21.960 19:20:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.960 19:20:46 -- common/autotest_common.sh@10 -- # set +x 00:06:21.960 ************************************ 00:06:21.960 START TEST event 00:06:21.960 ************************************ 00:06:21.960 19:20:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:21.960 * Looking for test storage... 00:06:21.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:21.960 19:20:47 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:21.960 19:20:47 -- bdev/nbd_common.sh@6 -- # set -e 00:06:21.960 19:20:47 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:21.960 19:20:47 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:21.960 19:20:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.960 19:20:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.960 ************************************ 00:06:21.960 START TEST event_perf 00:06:21.960 ************************************ 00:06:21.960 19:20:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:21.961 Running I/O for 1 seconds...[2024-04-24 19:20:47.319602] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:06:21.961 [2024-04-24 19:20:47.319798] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62995 ] 00:06:21.961 [2024-04-24 19:20:47.501870] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:22.219 [2024-04-24 19:20:47.771403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.220 [2024-04-24 19:20:47.771652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.220 [2024-04-24 19:20:47.771560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.220 Running I/O for 1 seconds...[2024-04-24 19:20:47.771721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.597 00:06:23.597 lcore 0: 190842 00:06:23.597 lcore 1: 190843 00:06:23.597 lcore 2: 190841 00:06:23.597 lcore 3: 190843 00:06:23.597 done. 
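The per-lcore totals above are event_perf's result: with -m 0xF the four reactors each processed roughly 190k events in the one-second window, so dispatch across cores is evenly balanced. A sketch of the same run issued directly, assuming the test binary path used by this job:

    ./test/event/event_perf/event_perf -m 0xF -t 1   # four reactors, 1 s, prints per-lcore counts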
00:06:23.597 00:06:23.597 real 0m1.935s 00:06:23.597 user 0m4.684s 00:06:23.597 sys 0m0.127s 00:06:23.597 19:20:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:23.597 ************************************ 00:06:23.597 END TEST event_perf 00:06:23.597 ************************************ 00:06:23.597 19:20:49 -- common/autotest_common.sh@10 -- # set +x 00:06:23.597 19:20:49 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:23.597 19:20:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:23.597 19:20:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:23.597 19:20:49 -- common/autotest_common.sh@10 -- # set +x 00:06:23.857 ************************************ 00:06:23.857 START TEST event_reactor 00:06:23.857 ************************************ 00:06:23.857 19:20:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:23.857 [2024-04-24 19:20:49.400379] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:06:23.857 [2024-04-24 19:20:49.400496] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63045 ] 00:06:24.117 [2024-04-24 19:20:49.565618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.376 [2024-04-24 19:20:49.844084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.754 test_start 00:06:25.754 oneshot 00:06:25.754 tick 100 00:06:25.754 tick 100 00:06:25.754 tick 250 00:06:25.754 tick 100 00:06:25.754 tick 100 00:06:25.754 tick 250 00:06:25.754 tick 100 00:06:25.754 tick 500 00:06:25.754 tick 100 00:06:25.754 tick 100 00:06:25.754 tick 250 00:06:25.754 tick 100 00:06:25.754 tick 100 00:06:25.754 test_end 00:06:25.754 ************************************ 00:06:25.754 END TEST event_reactor 00:06:25.754 ************************************ 00:06:25.754 00:06:25.754 real 0m1.930s 00:06:25.754 user 0m1.720s 00:06:25.754 sys 0m0.100s 00:06:25.754 19:20:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.754 19:20:51 -- common/autotest_common.sh@10 -- # set +x 00:06:25.754 19:20:51 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.754 19:20:51 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:25.754 19:20:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.755 19:20:51 -- common/autotest_common.sh@10 -- # set +x 00:06:25.755 ************************************ 00:06:25.755 START TEST event_reactor_perf 00:06:25.755 ************************************ 00:06:25.755 19:20:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:25.755 [2024-04-24 19:20:51.417903] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:06:25.755 [2024-04-24 19:20:51.418047] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63091 ] 00:06:26.013 [2024-04-24 19:20:51.591348] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.271 [2024-04-24 19:20:51.866750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.658 test_start 00:06:27.658 test_end 00:06:27.658 Performance: 310639 events per second 00:06:27.658 00:06:27.658 real 0m1.941s 00:06:27.658 user 0m1.734s 00:06:27.658 sys 0m0.096s 00:06:27.658 19:20:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.658 ************************************ 00:06:27.658 END TEST event_reactor_perf 00:06:27.658 ************************************ 00:06:27.658 19:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.918 19:20:53 -- event/event.sh@49 -- # uname -s 00:06:27.918 19:20:53 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:27.918 19:20:53 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:27.918 19:20:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.918 19:20:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.918 19:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.918 ************************************ 00:06:27.918 START TEST event_scheduler 00:06:27.918 ************************************ 00:06:27.918 19:20:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:27.918 * Looking for test storage... 00:06:27.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:27.918 19:20:53 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:27.918 19:20:53 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:27.918 19:20:53 -- scheduler/scheduler.sh@35 -- # scheduler_pid=63164 00:06:27.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.918 19:20:53 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.918 19:20:53 -- scheduler/scheduler.sh@37 -- # waitforlisten 63164 00:06:27.918 19:20:53 -- common/autotest_common.sh@817 -- # '[' -z 63164 ']' 00:06:27.918 19:20:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.918 19:20:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:27.918 19:20:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.918 19:20:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:27.918 19:20:53 -- common/autotest_common.sh@10 -- # set +x 00:06:28.178 [2024-04-24 19:20:53.675818] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:06:28.178 [2024-04-24 19:20:53.675945] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63164 ] 00:06:28.178 [2024-04-24 19:20:53.844500] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.747 [2024-04-24 19:20:54.117169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.747 [2024-04-24 19:20:54.117271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.747 [2024-04-24 19:20:54.117356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.747 [2024-04-24 19:20:54.117385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.006 19:20:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:29.006 19:20:54 -- common/autotest_common.sh@850 -- # return 0 00:06:29.006 19:20:54 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:29.006 19:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.006 19:20:54 -- common/autotest_common.sh@10 -- # set +x 00:06:29.006 POWER: Env isn't set yet! 00:06:29.006 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:29.006 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:29.006 POWER: Cannot set governor of lcore 0 to userspace 00:06:29.006 POWER: Attempting to initialise PSTAT power management... 00:06:29.006 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:29.006 POWER: Cannot set governor of lcore 0 to performance 00:06:29.006 POWER: Attempting to initialise AMD PSTATE power management... 00:06:29.006 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:29.006 POWER: Cannot set governor of lcore 0 to userspace 00:06:29.006 POWER: Attempting to initialise CPPC power management... 00:06:29.006 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:29.006 POWER: Cannot set governor of lcore 0 to userspace 00:06:29.006 POWER: Attempting to initialise VM power management... 00:06:29.006 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:29.006 POWER: Unable to set Power Management Environment for lcore 0 00:06:29.006 [2024-04-24 19:20:54.518114] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:29.007 [2024-04-24 19:20:54.518136] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:29.007 [2024-04-24 19:20:54.518146] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:29.007 19:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.007 19:20:54 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:29.007 19:20:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.007 19:20:54 -- common/autotest_common.sh@10 -- # set +x 00:06:29.266 [2024-04-24 19:20:54.932258] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
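The POWER errors above are the dynamic governor failing to open the cpufreq sysfs files in this VM; the scheduler test proceeds without frequency scaling, as the "Unable to initialize dpdk governor" notice confirms. The RPC handshake the test just performed can be reproduced by hand, assuming the binary and socket used above:

    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    ./scripts/rpc.py framework_set_scheduler dynamic   # must be set before init completes
    ./scripts/rpc.py framework_start_init              # app then logs test_start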
00:06:29.266 19:20:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.266 19:20:54 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:29.266 19:20:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.266 19:20:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.266 19:20:54 -- common/autotest_common.sh@10 -- # set +x 00:06:29.526 ************************************ 00:06:29.526 START TEST scheduler_create_thread 00:06:29.526 ************************************ 00:06:29.526 19:20:55 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:06:29.526 19:20:55 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:29.526 19:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.526 19:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:29.526 2 00:06:29.526 19:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.526 19:20:55 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:29.526 19:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.526 19:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:29.526 3 00:06:29.526 19:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.526 19:20:55 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:29.526 19:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.526 19:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:29.526 4 00:06:29.526 19:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.526 19:20:55 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:29.526 19:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.526 19:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:29.526 5 00:06:29.526 19:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.526 19:20:55 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:29.526 19:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.526 19:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:29.526 6 00:06:29.526 19:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.526 19:20:55 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:29.526 19:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.526 19:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:29.526 7 00:06:29.526 19:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.526 19:20:55 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:29.526 19:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.526 19:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:29.526 8 00:06:29.526 19:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.526 19:20:55 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:29.526 19:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.526 19:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:29.526 9 00:06:29.526 
19:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.526 19:20:55 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:29.526 19:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.526 19:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:29.526 10 00:06:29.526 19:20:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.526 19:20:55 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:29.526 19:20:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.526 19:20:55 -- common/autotest_common.sh@10 -- # set +x 00:06:30.903 19:20:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:30.903 19:20:56 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:30.903 19:20:56 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:30.903 19:20:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:30.903 19:20:56 -- common/autotest_common.sh@10 -- # set +x 00:06:31.876 19:20:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:31.876 19:20:57 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:31.876 19:20:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:31.876 19:20:57 -- common/autotest_common.sh@10 -- # set +x 00:06:32.441 19:20:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:32.441 19:20:58 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:32.441 19:20:58 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:32.441 19:20:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:32.441 19:20:58 -- common/autotest_common.sh@10 -- # set +x 00:06:33.378 ************************************ 00:06:33.378 END TEST scheduler_create_thread 00:06:33.378 ************************************ 00:06:33.378 19:20:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:33.378 00:06:33.378 real 0m3.785s 00:06:33.378 user 0m0.022s 00:06:33.378 sys 0m0.009s 00:06:33.378 19:20:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.378 19:20:58 -- common/autotest_common.sh@10 -- # set +x 00:06:33.378 19:20:58 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:33.378 19:20:58 -- scheduler/scheduler.sh@46 -- # killprocess 63164 00:06:33.378 19:20:58 -- common/autotest_common.sh@936 -- # '[' -z 63164 ']' 00:06:33.378 19:20:58 -- common/autotest_common.sh@940 -- # kill -0 63164 00:06:33.378 19:20:58 -- common/autotest_common.sh@941 -- # uname 00:06:33.378 19:20:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.378 19:20:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63164 00:06:33.378 killing process with pid 63164 00:06:33.378 19:20:58 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:33.378 19:20:58 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:33.378 19:20:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63164' 00:06:33.378 19:20:58 -- common/autotest_common.sh@955 -- # kill 63164 00:06:33.378 19:20:58 -- common/autotest_common.sh@960 -- # wait 63164 00:06:33.638 [2024-04-24 19:20:59.083679] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
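scheduler_create_thread above drove the test's RPC plugin: threads pinned to each core at varying active percentages plus unpinned ones, with one thread later set to 50% active and one deleted again. Individual calls look like the following, assuming scheduler_plugin is importable as scheduler.sh arranges (thread ids 11 and 12 are the ones this particular run returned):

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12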
00:06:35.018 ************************************ 00:06:35.018 END TEST event_scheduler 00:06:35.018 ************************************ 00:06:35.018 00:06:35.018 real 0m7.115s 00:06:35.018 user 0m15.852s 00:06:35.018 sys 0m0.510s 00:06:35.018 19:21:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.018 19:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:35.018 19:21:00 -- event/event.sh@51 -- # modprobe -n nbd 00:06:35.018 19:21:00 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:35.018 19:21:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.018 19:21:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.018 19:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:35.289 ************************************ 00:06:35.289 START TEST app_repeat 00:06:35.289 ************************************ 00:06:35.289 19:21:00 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:06:35.289 19:21:00 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.289 19:21:00 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.289 19:21:00 -- event/event.sh@13 -- # local nbd_list 00:06:35.289 19:21:00 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.289 19:21:00 -- event/event.sh@14 -- # local bdev_list 00:06:35.289 19:21:00 -- event/event.sh@15 -- # local repeat_times=4 00:06:35.289 19:21:00 -- event/event.sh@17 -- # modprobe nbd 00:06:35.289 Process app_repeat pid: 63301 00:06:35.289 19:21:00 -- event/event.sh@19 -- # repeat_pid=63301 00:06:35.289 19:21:00 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.289 19:21:00 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63301' 00:06:35.289 19:21:00 -- event/event.sh@23 -- # for i in {0..2} 00:06:35.289 spdk_app_start Round 0 00:06:35.289 19:21:00 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:35.289 19:21:00 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:35.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:35.289 19:21:00 -- event/event.sh@25 -- # waitforlisten 63301 /var/tmp/spdk-nbd.sock 00:06:35.289 19:21:00 -- common/autotest_common.sh@817 -- # '[' -z 63301 ']' 00:06:35.289 19:21:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:35.289 19:21:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:35.289 19:21:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:35.289 19:21:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:35.289 19:21:00 -- common/autotest_common.sh@10 -- # set +x 00:06:35.289 [2024-04-24 19:21:00.782102] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
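app_repeat, starting here, exports two malloc bdevs as /dev/nbd0 and /dev/nbd1 over its own socket, /var/tmp/spdk-nbd.sock; the dd and cmp transfers below are its write/verify pass against those devices. The same export can be scripted directly, assuming the socket and rpc.py path this job uses:

    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096      # -> Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks                   # JSON device map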
00:06:35.289 [2024-04-24 19:21:00.782236] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63301 ] 00:06:35.289 [2024-04-24 19:21:00.952955] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.561 [2024-04-24 19:21:01.231528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.561 [2024-04-24 19:21:01.231562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.129 19:21:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:36.129 19:21:01 -- common/autotest_common.sh@850 -- # return 0 00:06:36.129 19:21:01 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.387 Malloc0 00:06:36.387 19:21:02 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.646 Malloc1 00:06:36.905 19:21:02 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@12 -- # local i 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.905 19:21:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.905 /dev/nbd0 00:06:37.163 19:21:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.163 19:21:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.163 19:21:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:37.163 19:21:02 -- common/autotest_common.sh@855 -- # local i 00:06:37.163 19:21:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:37.163 19:21:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:37.163 19:21:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:37.163 19:21:02 -- common/autotest_common.sh@859 -- # break 00:06:37.163 19:21:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:37.163 19:21:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:37.163 19:21:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.163 1+0 records in 00:06:37.163 1+0 records out 00:06:37.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000372089 s, 11.0 MB/s 00:06:37.163 19:21:02 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.163 19:21:02 -- common/autotest_common.sh@872 -- # size=4096 00:06:37.163 19:21:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.163 19:21:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:37.163 19:21:02 -- common/autotest_common.sh@875 -- # return 0 00:06:37.163 19:21:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.163 19:21:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.163 19:21:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.422 /dev/nbd1 00:06:37.422 19:21:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.422 19:21:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.422 19:21:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:37.422 19:21:02 -- common/autotest_common.sh@855 -- # local i 00:06:37.422 19:21:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:37.422 19:21:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:37.422 19:21:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:37.422 19:21:02 -- common/autotest_common.sh@859 -- # break 00:06:37.422 19:21:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:37.422 19:21:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:37.422 19:21:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.422 1+0 records in 00:06:37.422 1+0 records out 00:06:37.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239492 s, 17.1 MB/s 00:06:37.422 19:21:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.422 19:21:02 -- common/autotest_common.sh@872 -- # size=4096 00:06:37.422 19:21:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.422 19:21:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:37.422 19:21:02 -- common/autotest_common.sh@875 -- # return 0 00:06:37.422 19:21:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.422 19:21:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.422 19:21:02 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.422 19:21:02 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.422 19:21:02 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.680 { 00:06:37.680 "nbd_device": "/dev/nbd0", 00:06:37.680 "bdev_name": "Malloc0" 00:06:37.680 }, 00:06:37.680 { 00:06:37.680 "nbd_device": "/dev/nbd1", 00:06:37.680 "bdev_name": "Malloc1" 00:06:37.680 } 00:06:37.680 ]' 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.680 { 00:06:37.680 "nbd_device": "/dev/nbd0", 00:06:37.680 "bdev_name": "Malloc0" 00:06:37.680 }, 00:06:37.680 { 00:06:37.680 "nbd_device": "/dev/nbd1", 00:06:37.680 "bdev_name": "Malloc1" 00:06:37.680 } 00:06:37.680 ]' 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.680 /dev/nbd1' 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.680 /dev/nbd1' 00:06:37.680 19:21:03 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@65 -- # count=2 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@95 -- # count=2 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.680 256+0 records in 00:06:37.680 256+0 records out 00:06:37.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134627 s, 77.9 MB/s 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.680 256+0 records in 00:06:37.680 256+0 records out 00:06:37.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025374 s, 41.3 MB/s 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.680 256+0 records in 00:06:37.680 256+0 records out 00:06:37.680 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0282898 s, 37.1 MB/s 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@51 -- # local i 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.680 19:21:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.938 19:21:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.938 19:21:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.938 19:21:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.938 19:21:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.938 19:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.938 19:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.938 19:21:03 -- bdev/nbd_common.sh@41 -- # break 00:06:37.938 19:21:03 -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.938 19:21:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.938 19:21:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.196 19:21:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.196 19:21:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.196 19:21:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.196 19:21:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.196 19:21:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.196 19:21:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.196 19:21:03 -- bdev/nbd_common.sh@41 -- # break 00:06:38.196 19:21:03 -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.196 19:21:03 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.196 19:21:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.196 19:21:03 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.455 19:21:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.455 19:21:03 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.455 19:21:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:38.455 19:21:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.455 19:21:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.455 19:21:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.455 19:21:04 -- bdev/nbd_common.sh@65 -- # true 00:06:38.455 19:21:04 -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.455 19:21:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.455 19:21:04 -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.455 19:21:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.455 19:21:04 -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.455 19:21:04 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.072 19:21:04 -- event/event.sh@35 -- # sleep 3 00:06:40.448 [2024-04-24 19:21:06.008744] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.706 [2024-04-24 19:21:06.277784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.706 [2024-04-24 19:21:06.277787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.963 [2024-04-24 19:21:06.559925] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:40.963 [2024-04-24 19:21:06.560010] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
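The round loop driving this test is visible in the event.sh xtrace above and below. A condensed sketch of it, reconstructed from those trace lines (helper names and RPC calls are taken verbatim from the trace; anything else, such as the variable holding the app's pid, is an assumption):

    # Sketch of the app_repeat outer loop, as inferred from the xtrace.
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # block until the app_repeat binary is listening on its RPC socket
        waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
        # create Malloc0/Malloc1, expose them as /dev/nbd0 and /dev/nbd1,
        # write and verify 1 MiB of random data, then detach both devices
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        # ask the app to end this round, then give it time to come back up
        rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3
    done

Each pass through this loop produces one of the 'spdk_app_start Round N' banners seen in the log.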
00:06:41.898 19:21:07 -- event/event.sh@23 -- # for i in {0..2} 00:06:41.898 19:21:07 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:41.898 spdk_app_start Round 1 00:06:41.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:41.898 19:21:07 -- event/event.sh@25 -- # waitforlisten 63301 /var/tmp/spdk-nbd.sock 00:06:41.898 19:21:07 -- common/autotest_common.sh@817 -- # '[' -z 63301 ']' 00:06:41.898 19:21:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.898 19:21:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:41.898 19:21:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.898 19:21:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:41.898 19:21:07 -- common/autotest_common.sh@10 -- # set +x 00:06:42.155 19:21:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:42.155 19:21:07 -- common/autotest_common.sh@850 -- # return 0 00:06:42.155 19:21:07 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.416 Malloc0 00:06:42.416 19:21:08 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.675 Malloc1 00:06:42.675 19:21:08 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@12 -- # local i 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.675 19:21:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:42.932 /dev/nbd0 00:06:42.932 19:21:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:42.932 19:21:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:42.932 19:21:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:42.932 19:21:08 -- common/autotest_common.sh@855 -- # local i 00:06:42.932 19:21:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:42.932 19:21:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:42.932 19:21:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:42.932 19:21:08 -- common/autotest_common.sh@859 -- # break 00:06:42.932 19:21:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:42.932 19:21:08 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:06:42.932 19:21:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.932 1+0 records in 00:06:42.932 1+0 records out 00:06:42.932 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293671 s, 13.9 MB/s 00:06:42.932 19:21:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:42.932 19:21:08 -- common/autotest_common.sh@872 -- # size=4096 00:06:42.932 19:21:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:42.933 19:21:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:42.933 19:21:08 -- common/autotest_common.sh@875 -- # return 0 00:06:42.933 19:21:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:42.933 19:21:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.933 19:21:08 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.191 /dev/nbd1 00:06:43.191 19:21:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.191 19:21:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.191 19:21:08 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:43.191 19:21:08 -- common/autotest_common.sh@855 -- # local i 00:06:43.191 19:21:08 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:43.191 19:21:08 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:43.191 19:21:08 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:43.191 19:21:08 -- common/autotest_common.sh@859 -- # break 00:06:43.191 19:21:08 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:43.191 19:21:08 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:43.191 19:21:08 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.191 1+0 records in 00:06:43.191 1+0 records out 00:06:43.191 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269886 s, 15.2 MB/s 00:06:43.191 19:21:08 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.191 19:21:08 -- common/autotest_common.sh@872 -- # size=4096 00:06:43.191 19:21:08 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.191 19:21:08 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:43.191 19:21:08 -- common/autotest_common.sh@875 -- # return 0 00:06:43.191 19:21:08 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.191 19:21:08 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.191 19:21:08 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.191 19:21:08 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.191 19:21:08 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.449 { 00:06:43.449 "nbd_device": "/dev/nbd0", 00:06:43.449 "bdev_name": "Malloc0" 00:06:43.449 }, 00:06:43.449 { 00:06:43.449 "nbd_device": "/dev/nbd1", 00:06:43.449 "bdev_name": "Malloc1" 00:06:43.449 } 00:06:43.449 ]' 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.449 { 00:06:43.449 "nbd_device": "/dev/nbd0", 00:06:43.449 "bdev_name": "Malloc0" 00:06:43.449 }, 00:06:43.449 { 00:06:43.449 "nbd_device": "/dev/nbd1", 00:06:43.449 "bdev_name": "Malloc1" 00:06:43.449 } 
00:06:43.449 ]' 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:43.449 /dev/nbd1' 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:43.449 /dev/nbd1' 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@65 -- # count=2 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@95 -- # count=2 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:43.449 256+0 records in 00:06:43.449 256+0 records out 00:06:43.449 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0118885 s, 88.2 MB/s 00:06:43.449 19:21:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.450 19:21:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:43.761 256+0 records in 00:06:43.761 256+0 records out 00:06:43.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239597 s, 43.8 MB/s 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:43.761 256+0 records in 00:06:43.761 256+0 records out 00:06:43.761 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268536 s, 39.0 MB/s 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:43.761 19:21:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@51 -- # local i 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@41 -- # break 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.761 19:21:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:44.019 19:21:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:44.019 19:21:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:44.019 19:21:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:44.019 19:21:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.019 19:21:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.019 19:21:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:44.019 19:21:09 -- bdev/nbd_common.sh@41 -- # break 00:06:44.019 19:21:09 -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.019 19:21:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.019 19:21:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.019 19:21:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.277 19:21:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.277 19:21:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.277 19:21:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.277 19:21:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.277 19:21:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.277 19:21:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.277 19:21:09 -- bdev/nbd_common.sh@65 -- # true 00:06:44.277 19:21:09 -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.277 19:21:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.277 19:21:09 -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.277 19:21:09 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.277 19:21:09 -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.277 19:21:09 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:44.846 19:21:10 -- event/event.sh@35 -- # sleep 3 00:06:46.785 [2024-04-24 19:21:11.982424] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.785 [2024-04-24 19:21:12.259206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.785 [2024-04-24 19:21:12.259227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.044 [2024-04-24 19:21:12.551443] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
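waitfornbd_exit, traced at nbd_common.sh@35-45 above, simply polls /proc/partitions until the kernel drops the device. A minimal sketch, assuming a short sleep between retries (the retry cap of 20 comes from the trace; the sleep interval does not appear in it):

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for (( i = 1; i <= 20; i++ )); do
            # the device vanishes from /proc/partitions once it is torn down
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed interval; not visible in the xtrace
        done
        return 0
    }

Its counterpart waitfornbd, seen when the disks were attached, loops the other way: it breaks once the device appears and then reads a single 4096-byte block with dd to confirm the device is usable.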
00:06:47.044 [2024-04-24 19:21:12.551522] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:47.982 spdk_app_start Round 2 00:06:47.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:47.982 19:21:13 -- event/event.sh@23 -- # for i in {0..2} 00:06:47.982 19:21:13 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:47.982 19:21:13 -- event/event.sh@25 -- # waitforlisten 63301 /var/tmp/spdk-nbd.sock 00:06:47.982 19:21:13 -- common/autotest_common.sh@817 -- # '[' -z 63301 ']' 00:06:47.982 19:21:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.982 19:21:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:47.982 19:21:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:47.982 19:21:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:47.982 19:21:13 -- common/autotest_common.sh@10 -- # set +x 00:06:47.982 19:21:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:47.982 19:21:13 -- common/autotest_common.sh@850 -- # return 0 00:06:47.982 19:21:13 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.242 Malloc0 00:06:48.501 19:21:13 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.761 Malloc1 00:06:48.761 19:21:14 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@12 -- # local i 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.761 19:21:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:49.020 /dev/nbd0 00:06:49.020 19:21:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.020 19:21:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.020 19:21:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:49.020 19:21:14 -- common/autotest_common.sh@855 -- # local i 00:06:49.020 19:21:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:49.020 19:21:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:49.020 19:21:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:49.020 19:21:14 -- common/autotest_common.sh@859 
-- # break 00:06:49.020 19:21:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:49.020 19:21:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:49.020 19:21:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.020 1+0 records in 00:06:49.020 1+0 records out 00:06:49.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403346 s, 10.2 MB/s 00:06:49.020 19:21:14 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.020 19:21:14 -- common/autotest_common.sh@872 -- # size=4096 00:06:49.020 19:21:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.020 19:21:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:49.020 19:21:14 -- common/autotest_common.sh@875 -- # return 0 00:06:49.020 19:21:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.020 19:21:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.020 19:21:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:49.279 /dev/nbd1 00:06:49.279 19:21:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.279 19:21:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.279 19:21:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:49.279 19:21:14 -- common/autotest_common.sh@855 -- # local i 00:06:49.279 19:21:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:49.279 19:21:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:49.279 19:21:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:49.279 19:21:14 -- common/autotest_common.sh@859 -- # break 00:06:49.279 19:21:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:49.279 19:21:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:49.279 19:21:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:49.279 1+0 records in 00:06:49.279 1+0 records out 00:06:49.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257164 s, 15.9 MB/s 00:06:49.279 19:21:14 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.279 19:21:14 -- common/autotest_common.sh@872 -- # size=4096 00:06:49.279 19:21:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:49.279 19:21:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:49.279 19:21:14 -- common/autotest_common.sh@875 -- # return 0 00:06:49.279 19:21:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.279 19:21:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:49.279 19:21:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.279 19:21:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.279 19:21:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.540 { 00:06:49.540 "nbd_device": "/dev/nbd0", 00:06:49.540 "bdev_name": "Malloc0" 00:06:49.540 }, 00:06:49.540 { 00:06:49.540 "nbd_device": "/dev/nbd1", 00:06:49.540 "bdev_name": "Malloc1" 00:06:49.540 } 00:06:49.540 ]' 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.540 { 00:06:49.540 "nbd_device": "/dev/nbd0", 00:06:49.540 
"bdev_name": "Malloc0" 00:06:49.540 }, 00:06:49.540 { 00:06:49.540 "nbd_device": "/dev/nbd1", 00:06:49.540 "bdev_name": "Malloc1" 00:06:49.540 } 00:06:49.540 ]' 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.540 /dev/nbd1' 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.540 /dev/nbd1' 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@65 -- # count=2 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@95 -- # count=2 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:49.540 256+0 records in 00:06:49.540 256+0 records out 00:06:49.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532564 s, 197 MB/s 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.540 256+0 records in 00:06:49.540 256+0 records out 00:06:49.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207301 s, 50.6 MB/s 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.540 256+0 records in 00:06:49.540 256+0 records out 00:06:49.540 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025725 s, 40.8 MB/s 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.540 19:21:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:49.797 19:21:15 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:49.797 19:21:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:49.798 19:21:15 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.798 19:21:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.798 19:21:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.798 19:21:15 -- bdev/nbd_common.sh@51 -- # local i 00:06:49.798 19:21:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.798 19:21:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.081 19:21:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.081 19:21:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.081 19:21:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.081 19:21:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.081 19:21:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.081 19:21:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.081 19:21:15 -- bdev/nbd_common.sh@41 -- # break 00:06:50.081 19:21:15 -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.081 19:21:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.081 19:21:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:50.348 19:21:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:50.348 19:21:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:50.348 19:21:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:50.348 19:21:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.348 19:21:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.348 19:21:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:50.348 19:21:15 -- bdev/nbd_common.sh@41 -- # break 00:06:50.348 19:21:15 -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.348 19:21:15 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.348 19:21:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.348 19:21:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.348 19:21:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:50.348 19:21:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:50.348 19:21:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.607 19:21:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:50.607 19:21:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.607 19:21:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:50.607 19:21:16 -- bdev/nbd_common.sh@65 -- # true 00:06:50.607 19:21:16 -- bdev/nbd_common.sh@65 -- # count=0 00:06:50.607 19:21:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:50.607 19:21:16 -- bdev/nbd_common.sh@104 -- # count=0 00:06:50.607 19:21:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:50.607 19:21:16 -- bdev/nbd_common.sh@109 -- # return 0 00:06:50.607 19:21:16 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:50.867 19:21:16 -- event/event.sh@35 -- # sleep 3 00:06:52.769 [2024-04-24 19:21:18.141529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.769 [2024-04-24 19:21:18.414821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.769 [2024-04-24 19:21:18.414824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.027 [2024-04-24 19:21:18.701412] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:06:53.027 [2024-04-24 19:21:18.701514] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:53.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:53.969 19:21:19 -- event/event.sh@38 -- # waitforlisten 63301 /var/tmp/spdk-nbd.sock 00:06:53.969 19:21:19 -- common/autotest_common.sh@817 -- # '[' -z 63301 ']' 00:06:53.969 19:21:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.969 19:21:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:53.969 19:21:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.969 19:21:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:53.969 19:21:19 -- common/autotest_common.sh@10 -- # set +x 00:06:54.227 19:21:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:54.227 19:21:19 -- common/autotest_common.sh@850 -- # return 0 00:06:54.227 19:21:19 -- event/event.sh@39 -- # killprocess 63301 00:06:54.227 19:21:19 -- common/autotest_common.sh@936 -- # '[' -z 63301 ']' 00:06:54.227 19:21:19 -- common/autotest_common.sh@940 -- # kill -0 63301 00:06:54.227 19:21:19 -- common/autotest_common.sh@941 -- # uname 00:06:54.227 19:21:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:54.227 19:21:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63301 00:06:54.227 killing process with pid 63301 00:06:54.227 19:21:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:54.227 19:21:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:54.227 19:21:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63301' 00:06:54.227 19:21:19 -- common/autotest_common.sh@955 -- # kill 63301 00:06:54.227 19:21:19 -- common/autotest_common.sh@960 -- # wait 63301 00:06:55.601 spdk_app_start is called in Round 0. 00:06:55.601 Shutdown signal received, stop current app iteration 00:06:55.601 Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 reinitialization... 00:06:55.601 spdk_app_start is called in Round 1. 00:06:55.601 Shutdown signal received, stop current app iteration 00:06:55.601 Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 reinitialization... 00:06:55.601 spdk_app_start is called in Round 2. 00:06:55.601 Shutdown signal received, stop current app iteration 00:06:55.601 Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 reinitialization... 00:06:55.601 spdk_app_start is called in Round 3. 
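killprocess, traced at autotest_common.sh@936-960 above, only signals a process it can positively identify as an SPDK reactor. A sketch of those checks, following the xtrace (the liveness test and the final wait are visible there; the sudo branch and error handling are assumptions):

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 1             # the process must still exist
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            if [[ $process_name == sudo ]]; then
                return 1   # assumption: the real helper special-cases sudo wrappers
            fi
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap it so the pid cannot be reused
    }

Note the comm check: in this log it resolves to reactor_0, the thread name SPDK gives its primary reactor.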
00:06:55.601 Shutdown signal received, stop current app iteration 00:06:55.601 ************************************ 00:06:55.601 END TEST app_repeat 00:06:55.601 ************************************ 00:06:55.601 19:21:21 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:55.601 19:21:21 -- event/event.sh@42 -- # return 0 00:06:55.601 00:06:55.601 real 0m20.400s 00:06:55.601 user 0m42.460s 00:06:55.601 sys 0m2.687s 00:06:55.601 19:21:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:55.601 19:21:21 -- common/autotest_common.sh@10 -- # set +x 00:06:55.601 19:21:21 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:55.601 19:21:21 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:55.601 19:21:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.601 19:21:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.601 19:21:21 -- common/autotest_common.sh@10 -- # set +x 00:06:55.601 ************************************ 00:06:55.601 START TEST cpu_locks 00:06:55.601 ************************************ 00:06:55.601 19:21:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:55.859 * Looking for test storage... 00:06:55.859 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:55.859 19:21:21 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:55.859 19:21:21 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:55.859 19:21:21 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:55.859 19:21:21 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:55.859 19:21:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:55.859 19:21:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:55.859 19:21:21 -- common/autotest_common.sh@10 -- # set +x 00:06:55.859 ************************************ 00:06:55.859 START TEST default_locks 00:06:55.859 ************************************ 00:06:55.859 19:21:21 -- common/autotest_common.sh@1111 -- # default_locks 00:06:55.859 19:21:21 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:55.859 19:21:21 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63761 00:06:55.859 19:21:21 -- event/cpu_locks.sh@47 -- # waitforlisten 63761 00:06:55.859 19:21:21 -- common/autotest_common.sh@817 -- # '[' -z 63761 ']' 00:06:55.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.859 19:21:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.859 19:21:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:55.859 19:21:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.859 19:21:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:55.859 19:21:21 -- common/autotest_common.sh@10 -- # set +x 00:06:56.117 [2024-04-24 19:21:21.548034] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
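The starred START TEST / END TEST banners around each test come from the run_test wrapper, whose argument checks appear above as autotest_common.sh@1087/@1093/@1111/@1112. A rough sketch of its shape (banner width and the xtrace toggling are simplified, and the timing bookkeeping the real wrapper may do is omitted):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"                      # run the test function or script with its args
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

Under set -e a failing test would abort before the END banner prints, which is one way a broken test shows up in logs like this one.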
00:06:56.117 [2024-04-24 19:21:21.548236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63761 ] 00:06:56.117 [2024-04-24 19:21:21.718208] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.375 [2024-04-24 19:21:21.981910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.751 19:21:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:57.751 19:21:22 -- common/autotest_common.sh@850 -- # return 0 00:06:57.751 19:21:22 -- event/cpu_locks.sh@49 -- # locks_exist 63761 00:06:57.751 19:21:22 -- event/cpu_locks.sh@22 -- # lslocks -p 63761 00:06:57.751 19:21:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.751 19:21:23 -- event/cpu_locks.sh@50 -- # killprocess 63761 00:06:57.751 19:21:23 -- common/autotest_common.sh@936 -- # '[' -z 63761 ']' 00:06:57.751 19:21:23 -- common/autotest_common.sh@940 -- # kill -0 63761 00:06:57.751 19:21:23 -- common/autotest_common.sh@941 -- # uname 00:06:57.751 19:21:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:57.751 19:21:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63761 00:06:57.751 19:21:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:57.751 19:21:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:57.751 19:21:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63761' 00:06:57.751 killing process with pid 63761 00:06:57.751 19:21:23 -- common/autotest_common.sh@955 -- # kill 63761 00:06:57.751 19:21:23 -- common/autotest_common.sh@960 -- # wait 63761 00:07:00.281 19:21:25 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63761 00:07:00.281 19:21:25 -- common/autotest_common.sh@638 -- # local es=0 00:07:00.281 19:21:25 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 63761 00:07:00.281 19:21:25 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:00.281 19:21:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:00.281 19:21:25 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:00.281 19:21:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:00.281 19:21:25 -- common/autotest_common.sh@641 -- # waitforlisten 63761 00:07:00.281 19:21:25 -- common/autotest_common.sh@817 -- # '[' -z 63761 ']' 00:07:00.281 19:21:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.281 19:21:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:00.281 19:21:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
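locks_exist, exercised at cpu_locks.sh@22 above, checks that the target is really holding its CPU-core lock file. A minimal sketch, assuming only what the two traced commands show:

    locks_exist() {
        local pid=$1
        # SPDK takes one flock-ed lock file per claimed core; lslocks lists
        # every lock the process holds, and the file name carries the
        # spdk_cpu_lock prefix seen in the grep above
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

default_locks then kills the target and re-runs waitforlisten under NOT, expecting it to fail; the 'No such process' error just below is that expected failure.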
00:07:00.281 19:21:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:00.281 19:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.281 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (63761) - No such process 00:07:00.281 ERROR: process (pid: 63761) is no longer running 00:07:00.281 19:21:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:00.281 19:21:25 -- common/autotest_common.sh@850 -- # return 1 00:07:00.281 19:21:25 -- common/autotest_common.sh@641 -- # es=1 00:07:00.281 19:21:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:00.281 19:21:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:00.281 19:21:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:00.281 19:21:25 -- event/cpu_locks.sh@54 -- # no_locks 00:07:00.281 19:21:25 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:00.281 19:21:25 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:00.281 19:21:25 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:00.281 00:07:00.281 real 0m4.500s 00:07:00.281 user 0m4.409s 00:07:00.281 sys 0m0.586s 00:07:00.281 19:21:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:00.281 19:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.281 ************************************ 00:07:00.281 END TEST default_locks 00:07:00.281 ************************************ 00:07:00.539 19:21:25 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:00.539 19:21:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.539 19:21:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.539 19:21:25 -- common/autotest_common.sh@10 -- # set +x 00:07:00.539 ************************************ 00:07:00.539 START TEST default_locks_via_rpc 00:07:00.539 ************************************ 00:07:00.539 19:21:26 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:07:00.539 19:21:26 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63845 00:07:00.539 19:21:26 -- event/cpu_locks.sh@63 -- # waitforlisten 63845 00:07:00.539 19:21:26 -- common/autotest_common.sh@817 -- # '[' -z 63845 ']' 00:07:00.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.539 19:21:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.539 19:21:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:00.539 19:21:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.539 19:21:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:00.539 19:21:26 -- common/autotest_common.sh@10 -- # set +x 00:07:00.540 19:21:26 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.540 [2024-04-24 19:21:26.187552] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
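The NOT wrapper used in the previous test inverts a command's exit status so that an expected failure counts as a pass. A condensed sketch following the @638-@665 xtrace lines (valid_exec_arg, which merely checks that the argument is runnable, is elided):

    NOT() {
        local es=0
        "$@" || es=$?   # run the command, capturing failure instead of aborting
        # exit statuses above 128 mean death by signal; the trace shows only the
        # range check here, so the normalization that follows is an assumption
        (( es > 128 )) && es=1
        # succeed exactly when the wrapped command failed
        (( !es == 0 ))
    }

So NOT waitforlisten 63761 returned 0 above precisely because waitforlisten exited 1 against the already-killed pid.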
00:07:00.540 [2024-04-24 19:21:26.187694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63845 ] 00:07:00.797 [2024-04-24 19:21:26.354916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.056 [2024-04-24 19:21:26.616213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.988 19:21:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:01.988 19:21:27 -- common/autotest_common.sh@850 -- # return 0 00:07:01.988 19:21:27 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:01.988 19:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.988 19:21:27 -- common/autotest_common.sh@10 -- # set +x 00:07:01.988 19:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.988 19:21:27 -- event/cpu_locks.sh@67 -- # no_locks 00:07:01.988 19:21:27 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:01.988 19:21:27 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:01.988 19:21:27 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:01.988 19:21:27 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:01.988 19:21:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.988 19:21:27 -- common/autotest_common.sh@10 -- # set +x 00:07:01.988 19:21:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.988 19:21:27 -- event/cpu_locks.sh@71 -- # locks_exist 63845 00:07:01.988 19:21:27 -- event/cpu_locks.sh@22 -- # lslocks -p 63845 00:07:01.988 19:21:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.552 19:21:27 -- event/cpu_locks.sh@73 -- # killprocess 63845 00:07:02.552 19:21:27 -- common/autotest_common.sh@936 -- # '[' -z 63845 ']' 00:07:02.552 19:21:27 -- common/autotest_common.sh@940 -- # kill -0 63845 00:07:02.552 19:21:27 -- common/autotest_common.sh@941 -- # uname 00:07:02.552 19:21:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.552 19:21:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63845 00:07:02.552 killing process with pid 63845 00:07:02.552 19:21:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.552 19:21:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.552 19:21:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63845' 00:07:02.552 19:21:27 -- common/autotest_common.sh@955 -- # kill 63845 00:07:02.552 19:21:27 -- common/autotest_common.sh@960 -- # wait 63845 00:07:05.080 ************************************ 00:07:05.080 END TEST default_locks_via_rpc 00:07:05.080 ************************************ 00:07:05.080 00:07:05.080 real 0m4.595s 00:07:05.080 user 0m4.529s 00:07:05.080 sys 0m0.582s 00:07:05.080 19:21:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.080 19:21:30 -- common/autotest_common.sh@10 -- # set +x 00:07:05.080 19:21:30 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:05.080 19:21:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.080 19:21:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.080 19:21:30 -- common/autotest_common.sh@10 -- # set +x 00:07:05.340 ************************************ 00:07:05.340 START TEST non_locking_app_on_locked_coremask 00:07:05.340 ************************************ 00:07:05.340 19:21:30 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:07:05.340 19:21:30 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63929 00:07:05.340 19:21:30 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.340 19:21:30 -- event/cpu_locks.sh@81 -- # waitforlisten 63929 /var/tmp/spdk.sock 00:07:05.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.340 19:21:30 -- common/autotest_common.sh@817 -- # '[' -z 63929 ']' 00:07:05.340 19:21:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.340 19:21:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:05.340 19:21:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.340 19:21:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:05.340 19:21:30 -- common/autotest_common.sh@10 -- # set +x 00:07:05.340 [2024-04-24 19:21:30.917469] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:07:05.340 [2024-04-24 19:21:30.917585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63929 ] 00:07:05.599 [2024-04-24 19:21:31.084670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.857 [2024-04-24 19:21:31.351152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.797 19:21:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:06.797 19:21:32 -- common/autotest_common.sh@850 -- # return 0 00:07:06.797 19:21:32 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63950 00:07:06.797 19:21:32 -- event/cpu_locks.sh@85 -- # waitforlisten 63950 /var/tmp/spdk2.sock 00:07:06.797 19:21:32 -- common/autotest_common.sh@817 -- # '[' -z 63950 ']' 00:07:06.797 19:21:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.797 19:21:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:06.797 19:21:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.797 19:21:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:06.797 19:21:32 -- common/autotest_common.sh@10 -- # set +x 00:07:06.797 19:21:32 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:06.797 [2024-04-24 19:21:32.423389] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:07:06.797 [2024-04-24 19:21:32.423524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63950 ] 00:07:07.057 [2024-04-24 19:21:32.579895] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
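The 'CPU core locks deactivated' notice above is the point of this test: a second target is allowed onto an already-locked core only because it opts out of cpumask locking. A sketch of the arrangement, with flags and socket paths copied from the two spdk_tgt command lines in the trace (the binary path is abbreviated):

    # first target claims core 0 and takes its spdk_cpu_lock file
    spdk_tgt -m 0x1 &
    pid1=$!
    waitforlisten "$pid1" /var/tmp/spdk.sock

    # second target shares core 0, so it must disable cpumask locking
    # and listen on its own RPC socket
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock

    # the core lock is still held by the first instance alone
    lslocks -p "$pid1" | grep -q spdk_cpu_lock

Without --disable-cpumask-locks the second instance would fail to start on -m 0x1, since the first instance already holds the core-0 lock file checked just above.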
00:07:07.057 [2024-04-24 19:21:32.579951] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.626 [2024-04-24 19:21:33.087804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.533 19:21:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:09.533 19:21:35 -- common/autotest_common.sh@850 -- # return 0 00:07:09.533 19:21:35 -- event/cpu_locks.sh@87 -- # locks_exist 63929 00:07:09.533 19:21:35 -- event/cpu_locks.sh@22 -- # lslocks -p 63929 00:07:09.533 19:21:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.102 19:21:35 -- event/cpu_locks.sh@89 -- # killprocess 63929 00:07:10.102 19:21:35 -- common/autotest_common.sh@936 -- # '[' -z 63929 ']' 00:07:10.102 19:21:35 -- common/autotest_common.sh@940 -- # kill -0 63929 00:07:10.102 19:21:35 -- common/autotest_common.sh@941 -- # uname 00:07:10.102 19:21:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:10.102 19:21:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63929 00:07:10.102 killing process with pid 63929 00:07:10.102 19:21:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:10.102 19:21:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:10.102 19:21:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63929' 00:07:10.102 19:21:35 -- common/autotest_common.sh@955 -- # kill 63929 00:07:10.102 19:21:35 -- common/autotest_common.sh@960 -- # wait 63929 00:07:15.374 19:21:40 -- event/cpu_locks.sh@90 -- # killprocess 63950 00:07:15.374 19:21:40 -- common/autotest_common.sh@936 -- # '[' -z 63950 ']' 00:07:15.374 19:21:40 -- common/autotest_common.sh@940 -- # kill -0 63950 00:07:15.374 19:21:40 -- common/autotest_common.sh@941 -- # uname 00:07:15.374 19:21:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.374 19:21:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63950 00:07:15.374 killing process with pid 63950 00:07:15.374 19:21:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.374 19:21:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.374 19:21:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63950' 00:07:15.374 19:21:40 -- common/autotest_common.sh@955 -- # kill 63950 00:07:15.374 19:21:40 -- common/autotest_common.sh@960 -- # wait 63950 00:07:17.913 ************************************ 00:07:17.913 END TEST non_locking_app_on_locked_coremask 00:07:17.913 ************************************ 00:07:17.913 00:07:17.913 real 0m12.484s 00:07:17.913 user 0m12.690s 00:07:17.913 sys 0m1.199s 00:07:17.913 19:21:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.913 19:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:17.913 19:21:43 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:17.913 19:21:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:17.913 19:21:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.913 19:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:17.913 ************************************ 00:07:17.913 START TEST locking_app_on_unlocked_coremask 00:07:17.913 ************************************ 00:07:17.913 19:21:43 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:07:17.913 19:21:43 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=64108 00:07:17.913 19:21:43 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x1 --disable-cpumask-locks 00:07:17.913 19:21:43 -- event/cpu_locks.sh@99 -- # waitforlisten 64108 /var/tmp/spdk.sock 00:07:17.913 19:21:43 -- common/autotest_common.sh@817 -- # '[' -z 64108 ']' 00:07:17.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.913 19:21:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.913 19:21:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:17.913 19:21:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.913 19:21:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:17.913 19:21:43 -- common/autotest_common.sh@10 -- # set +x 00:07:17.913 [2024-04-24 19:21:43.522899] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:07:17.913 [2024-04-24 19:21:43.523018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64108 ] 00:07:18.172 [2024-04-24 19:21:43.689925] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:18.172 [2024-04-24 19:21:43.689986] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.431 [2024-04-24 19:21:43.953963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.366 19:21:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:19.366 19:21:44 -- common/autotest_common.sh@850 -- # return 0 00:07:19.366 19:21:44 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=64130 00:07:19.366 19:21:44 -- event/cpu_locks.sh@103 -- # waitforlisten 64130 /var/tmp/spdk2.sock 00:07:19.366 19:21:44 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:19.366 19:21:44 -- common/autotest_common.sh@817 -- # '[' -z 64130 ']' 00:07:19.366 19:21:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.366 19:21:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:19.366 19:21:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.366 19:21:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:19.366 19:21:44 -- common/autotest_common.sh@10 -- # set +x 00:07:19.366 [2024-04-24 19:21:45.014959] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
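Note: once the second target (pid 64130) finishes startup, the trace below re-runs the lock check from cpu_locks.sh@22. Reconstructed from the xtrace lines, the locks_exist helper amounts to asking lslocks whether the process holds any spdk_cpu_lock file (exact function body inferred from the sh@22 commands, so treat this as a sketch):

  locks_exist() {
      local pid=$1
      # true if the pid holds at least one /var/tmp/spdk_cpu_lock_* flock
      lslocks -p "$pid" | grep -q spdk_cpu_lock
  }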
00:07:19.366 [2024-04-24 19:21:45.015084] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64130 ] 00:07:19.625 [2024-04-24 19:21:45.176932] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.193 [2024-04-24 19:21:45.677533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.096 19:21:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:22.096 19:21:47 -- common/autotest_common.sh@850 -- # return 0 00:07:22.096 19:21:47 -- event/cpu_locks.sh@105 -- # locks_exist 64130 00:07:22.096 19:21:47 -- event/cpu_locks.sh@22 -- # lslocks -p 64130 00:07:22.096 19:21:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:22.734 19:21:48 -- event/cpu_locks.sh@107 -- # killprocess 64108 00:07:22.734 19:21:48 -- common/autotest_common.sh@936 -- # '[' -z 64108 ']' 00:07:22.734 19:21:48 -- common/autotest_common.sh@940 -- # kill -0 64108 00:07:22.734 19:21:48 -- common/autotest_common.sh@941 -- # uname 00:07:22.734 19:21:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:22.734 19:21:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64108 00:07:22.734 killing process with pid 64108 00:07:22.734 19:21:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:22.734 19:21:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:22.734 19:21:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64108' 00:07:22.734 19:21:48 -- common/autotest_common.sh@955 -- # kill 64108 00:07:22.734 19:21:48 -- common/autotest_common.sh@960 -- # wait 64108 00:07:28.006 19:21:53 -- event/cpu_locks.sh@108 -- # killprocess 64130 00:07:28.006 19:21:53 -- common/autotest_common.sh@936 -- # '[' -z 64130 ']' 00:07:28.006 19:21:53 -- common/autotest_common.sh@940 -- # kill -0 64130 00:07:28.006 19:21:53 -- common/autotest_common.sh@941 -- # uname 00:07:28.006 19:21:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:28.006 19:21:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64130 00:07:28.006 killing process with pid 64130 00:07:28.006 19:21:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:28.006 19:21:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:28.006 19:21:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64130' 00:07:28.006 19:21:53 -- common/autotest_common.sh@955 -- # kill 64130 00:07:28.006 19:21:53 -- common/autotest_common.sh@960 -- # wait 64130 00:07:30.540 ************************************ 00:07:30.540 END TEST locking_app_on_unlocked_coremask 00:07:30.540 ************************************ 00:07:30.540 00:07:30.540 real 0m12.473s 00:07:30.540 user 0m12.731s 00:07:30.540 sys 0m1.216s 00:07:30.540 19:21:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:30.540 19:21:55 -- common/autotest_common.sh@10 -- # set +x 00:07:30.540 19:21:55 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:30.540 19:21:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:30.540 19:21:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.540 19:21:55 -- common/autotest_common.sh@10 -- # set +x 00:07:30.540 ************************************ 00:07:30.540 START TEST locking_app_on_locked_coremask 00:07:30.540 
************************************ 00:07:30.540 19:21:56 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:07:30.540 19:21:56 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64290 00:07:30.540 19:21:56 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:30.540 19:21:56 -- event/cpu_locks.sh@116 -- # waitforlisten 64290 /var/tmp/spdk.sock 00:07:30.540 19:21:56 -- common/autotest_common.sh@817 -- # '[' -z 64290 ']' 00:07:30.540 19:21:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.540 19:21:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:30.540 19:21:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.540 19:21:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:30.540 19:21:56 -- common/autotest_common.sh@10 -- # set +x 00:07:30.540 [2024-04-24 19:21:56.136163] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:07:30.540 [2024-04-24 19:21:56.136271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64290 ] 00:07:30.798 [2024-04-24 19:21:56.300760] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.057 [2024-04-24 19:21:56.556968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.001 19:21:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:32.001 19:21:57 -- common/autotest_common.sh@850 -- # return 0 00:07:32.001 19:21:57 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64312 00:07:32.001 19:21:57 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64312 /var/tmp/spdk2.sock 00:07:32.001 19:21:57 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:32.001 19:21:57 -- common/autotest_common.sh@638 -- # local es=0 00:07:32.001 19:21:57 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 64312 /var/tmp/spdk2.sock 00:07:32.001 19:21:57 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:32.001 19:21:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:32.001 19:21:57 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:32.001 19:21:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:32.001 19:21:57 -- common/autotest_common.sh@641 -- # waitforlisten 64312 /var/tmp/spdk2.sock 00:07:32.001 19:21:57 -- common/autotest_common.sh@817 -- # '[' -z 64312 ']' 00:07:32.001 19:21:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.001 19:21:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:32.001 19:21:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.001 19:21:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:32.001 19:21:57 -- common/autotest_common.sh@10 -- # set +x 00:07:32.001 [2024-04-24 19:21:57.634225] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
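Note: the second target launched below (pid 64312) is expected to die, because pid 64290 already holds the core-0 lock. The expectation is encoded by the NOT wrapper visible in the xtrace (valid_exec_arg plus the es bookkeeping); in outline it simply inverts the exit status, roughly like this simplified sketch:

  NOT() {
      # succeed only when the wrapped command fails
      if "$@"; then
          return 1
      fi
      return 0
  }
  NOT waitforlisten 64312 /var/tmp/spdk2.sock   # passes because startup aborts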
00:07:32.001 [2024-04-24 19:21:57.634424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64312 ] 00:07:32.259 [2024-04-24 19:21:57.794995] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64290 has claimed it. 00:07:32.259 [2024-04-24 19:21:57.795073] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:32.825 ERROR: process (pid: 64312) is no longer running 00:07:32.825 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (64312) - No such process 00:07:32.825 19:21:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:32.825 19:21:58 -- common/autotest_common.sh@850 -- # return 1 00:07:32.825 19:21:58 -- common/autotest_common.sh@641 -- # es=1 00:07:32.825 19:21:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:32.825 19:21:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:32.825 19:21:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:32.825 19:21:58 -- event/cpu_locks.sh@122 -- # locks_exist 64290 00:07:32.825 19:21:58 -- event/cpu_locks.sh@22 -- # lslocks -p 64290 00:07:32.825 19:21:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:33.084 19:21:58 -- event/cpu_locks.sh@124 -- # killprocess 64290 00:07:33.084 19:21:58 -- common/autotest_common.sh@936 -- # '[' -z 64290 ']' 00:07:33.084 19:21:58 -- common/autotest_common.sh@940 -- # kill -0 64290 00:07:33.084 19:21:58 -- common/autotest_common.sh@941 -- # uname 00:07:33.084 19:21:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:33.084 19:21:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64290 00:07:33.084 19:21:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:33.084 19:21:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:33.084 19:21:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64290' 00:07:33.084 killing process with pid 64290 00:07:33.084 19:21:58 -- common/autotest_common.sh@955 -- # kill 64290 00:07:33.084 19:21:58 -- common/autotest_common.sh@960 -- # wait 64290 00:07:35.615 00:07:35.615 real 0m5.085s 00:07:35.615 user 0m5.230s 00:07:35.615 sys 0m0.710s 00:07:35.615 19:22:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:35.615 19:22:01 -- common/autotest_common.sh@10 -- # set +x 00:07:35.615 ************************************ 00:07:35.615 END TEST locking_app_on_locked_coremask 00:07:35.615 ************************************ 00:07:35.615 19:22:01 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:35.615 19:22:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:35.615 19:22:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:35.615 19:22:01 -- common/autotest_common.sh@10 -- # set +x 00:07:35.615 ************************************ 00:07:35.615 START TEST locking_overlapped_coremask 00:07:35.615 ************************************ 00:07:35.615 19:22:01 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:07:35.615 19:22:01 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:35.615 19:22:01 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64385 00:07:35.615 19:22:01 -- event/cpu_locks.sh@133 -- # waitforlisten 64385 /var/tmp/spdk.sock 00:07:35.615 
19:22:01 -- common/autotest_common.sh@817 -- # '[' -z 64385 ']' 00:07:35.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.615 19:22:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.615 19:22:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:35.615 19:22:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.615 19:22:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:35.615 19:22:01 -- common/autotest_common.sh@10 -- # set +x 00:07:35.917 [2024-04-24 19:22:01.366295] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:07:35.917 [2024-04-24 19:22:01.366456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64385 ] 00:07:35.917 [2024-04-24 19:22:01.524601] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.195 [2024-04-24 19:22:01.777455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.195 [2024-04-24 19:22:01.777575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.195 [2024-04-24 19:22:01.777610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.132 19:22:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:37.132 19:22:02 -- common/autotest_common.sh@850 -- # return 0 00:07:37.132 19:22:02 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:37.132 19:22:02 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64409 00:07:37.132 19:22:02 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64409 /var/tmp/spdk2.sock 00:07:37.132 19:22:02 -- common/autotest_common.sh@638 -- # local es=0 00:07:37.132 19:22:02 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 64409 /var/tmp/spdk2.sock 00:07:37.132 19:22:02 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:37.132 19:22:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:37.132 19:22:02 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:37.132 19:22:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:37.132 19:22:02 -- common/autotest_common.sh@641 -- # waitforlisten 64409 /var/tmp/spdk2.sock 00:07:37.132 19:22:02 -- common/autotest_common.sh@817 -- # '[' -z 64409 ']' 00:07:37.132 19:22:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.132 19:22:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:37.132 19:22:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.132 19:22:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:37.132 19:22:02 -- common/autotest_common.sh@10 -- # set +x 00:07:37.392 [2024-04-24 19:22:02.886664] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
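Note on why the failure below lands on core 2 specifically: the first target runs with -m 0x7 and the challenger just launched with -m 0x1c, and the two masks overlap in exactly one bit:

  # 0x7  = 0b00111 -> cores 0,1,2   (held by pid 64385)
  # 0x1c = 0b11100 -> cores 2,3,4   (requested by pid 64409)
  # overlap: core 2 -> expect 'Cannot create lock on core 2' in the trace below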
00:07:37.392 [2024-04-24 19:22:02.886870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64409 ] 00:07:37.392 [2024-04-24 19:22:03.051374] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64385 has claimed it. 00:07:37.392 [2024-04-24 19:22:03.051445] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:37.961 ERROR: process (pid: 64409) is no longer running 00:07:37.961 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (64409) - No such process 00:07:37.961 19:22:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:37.961 19:22:03 -- common/autotest_common.sh@850 -- # return 1 00:07:37.961 19:22:03 -- common/autotest_common.sh@641 -- # es=1 00:07:37.961 19:22:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:37.961 19:22:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:37.961 19:22:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:37.961 19:22:03 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:37.961 19:22:03 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:37.961 19:22:03 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:37.961 19:22:03 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:37.961 19:22:03 -- event/cpu_locks.sh@141 -- # killprocess 64385 00:07:37.961 19:22:03 -- common/autotest_common.sh@936 -- # '[' -z 64385 ']' 00:07:37.961 19:22:03 -- common/autotest_common.sh@940 -- # kill -0 64385 00:07:37.961 19:22:03 -- common/autotest_common.sh@941 -- # uname 00:07:37.961 19:22:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:37.961 19:22:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64385 00:07:37.961 19:22:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:37.961 19:22:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:37.961 19:22:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64385' 00:07:37.961 killing process with pid 64385 00:07:37.961 19:22:03 -- common/autotest_common.sh@955 -- # kill 64385 00:07:37.961 19:22:03 -- common/autotest_common.sh@960 -- # wait 64385 00:07:41.312 00:07:41.312 real 0m5.035s 00:07:41.312 user 0m13.159s 00:07:41.312 sys 0m0.540s 00:07:41.312 19:22:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:41.312 19:22:06 -- common/autotest_common.sh@10 -- # set +x 00:07:41.312 ************************************ 00:07:41.312 END TEST locking_overlapped_coremask 00:07:41.312 ************************************ 00:07:41.312 19:22:06 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:41.312 19:22:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:41.312 19:22:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.312 19:22:06 -- common/autotest_common.sh@10 -- # set +x 00:07:41.312 ************************************ 00:07:41.312 START TEST locking_overlapped_coremask_via_rpc 00:07:41.312 
************************************ 00:07:41.312 19:22:06 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:07:41.312 19:22:06 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64484 00:07:41.312 19:22:06 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:41.312 19:22:06 -- event/cpu_locks.sh@149 -- # waitforlisten 64484 /var/tmp/spdk.sock 00:07:41.312 19:22:06 -- common/autotest_common.sh@817 -- # '[' -z 64484 ']' 00:07:41.312 19:22:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.312 19:22:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:41.313 19:22:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.313 19:22:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:41.313 19:22:06 -- common/autotest_common.sh@10 -- # set +x 00:07:41.313 [2024-04-24 19:22:06.548273] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:07:41.313 [2024-04-24 19:22:06.548378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64484 ] 00:07:41.313 [2024-04-24 19:22:06.709770] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:41.313 [2024-04-24 19:22:06.709844] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.313 [2024-04-24 19:22:06.977831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.313 [2024-04-24 19:22:06.977976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.313 [2024-04-24 19:22:06.978019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.693 19:22:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:42.693 19:22:08 -- common/autotest_common.sh@850 -- # return 0 00:07:42.693 19:22:08 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64506 00:07:42.693 19:22:08 -- event/cpu_locks.sh@153 -- # waitforlisten 64506 /var/tmp/spdk2.sock 00:07:42.693 19:22:08 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:42.693 19:22:08 -- common/autotest_common.sh@817 -- # '[' -z 64506 ']' 00:07:42.693 19:22:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.693 19:22:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:42.693 19:22:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:42.693 19:22:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:42.693 19:22:08 -- common/autotest_common.sh@10 -- # set +x 00:07:42.693 [2024-04-24 19:22:08.138508] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
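Note: unlike the previous test, both targets here start with --disable-cpumask-locks, so neither takes any lock at boot; the locks are claimed afterwards over JSON-RPC. framework_enable_cpumask_locks is the method exercised below via rpc_cmd; issued by hand it would look roughly like this (the rpc.py path is assumed from the usual SPDK repo layout, it is not shown in this log):

  # claim per-core lock files for the running target's mask after startup
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks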
00:07:42.693 [2024-04-24 19:22:08.138763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64506 ] 00:07:42.693 [2024-04-24 19:22:08.299963] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:42.693 [2024-04-24 19:22:08.300019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:43.260 [2024-04-24 19:22:08.845919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.260 [2024-04-24 19:22:08.845938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.260 [2024-04-24 19:22:08.845950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:45.795 19:22:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:45.795 19:22:10 -- common/autotest_common.sh@850 -- # return 0 00:07:45.795 19:22:10 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:45.795 19:22:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.795 19:22:10 -- common/autotest_common.sh@10 -- # set +x 00:07:45.795 19:22:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:45.795 19:22:11 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:45.795 19:22:11 -- common/autotest_common.sh@638 -- # local es=0 00:07:45.795 19:22:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:45.795 19:22:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:07:45.795 19:22:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:45.795 19:22:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:07:45.795 19:22:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:45.795 19:22:11 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:45.795 19:22:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:45.795 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:07:45.795 [2024-04-24 19:22:11.013871] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64484 has claimed it. 00:07:45.795 request: 00:07:45.795 { 00:07:45.795 "method": "framework_enable_cpumask_locks", 00:07:45.795 "req_id": 1 00:07:45.795 } 00:07:45.795 Got JSON-RPC error response 00:07:45.795 response: 00:07:45.795 { 00:07:45.795 "code": -32603, 00:07:45.795 "message": "Failed to claim CPU core: 2" 00:07:45.795 } 00:07:45.795 19:22:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:07:45.795 19:22:11 -- common/autotest_common.sh@641 -- # es=1 00:07:45.795 19:22:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:45.795 19:22:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:45.795 19:22:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:45.795 19:22:11 -- event/cpu_locks.sh@158 -- # waitforlisten 64484 /var/tmp/spdk.sock 00:07:45.795 19:22:11 -- common/autotest_common.sh@817 -- # '[' -z 64484 ']' 00:07:45.795 19:22:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.795 19:22:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:45.795 19:22:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:45.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.795 19:22:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:45.795 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:07:45.795 19:22:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:45.795 19:22:11 -- common/autotest_common.sh@850 -- # return 0 00:07:45.795 19:22:11 -- event/cpu_locks.sh@159 -- # waitforlisten 64506 /var/tmp/spdk2.sock 00:07:45.795 19:22:11 -- common/autotest_common.sh@817 -- # '[' -z 64506 ']' 00:07:45.795 19:22:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.795 19:22:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:45.795 19:22:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.795 19:22:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:45.795 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:07:45.795 19:22:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:45.795 19:22:11 -- common/autotest_common.sh@850 -- # return 0 00:07:45.795 19:22:11 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:45.795 19:22:11 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:45.795 19:22:11 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:45.795 19:22:11 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:45.796 00:07:45.796 real 0m4.984s 00:07:45.796 user 0m1.276s 00:07:45.796 sys 0m0.190s 00:07:45.796 19:22:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.796 19:22:11 -- common/autotest_common.sh@10 -- # set +x 00:07:45.796 ************************************ 00:07:45.796 END TEST locking_overlapped_coremask_via_rpc 00:07:45.796 ************************************ 00:07:45.796 19:22:11 -- event/cpu_locks.sh@174 -- # cleanup 00:07:45.796 19:22:11 -- event/cpu_locks.sh@15 -- # [[ -z 64484 ]] 00:07:45.796 19:22:11 -- event/cpu_locks.sh@15 -- # killprocess 64484 00:07:45.796 19:22:11 -- common/autotest_common.sh@936 -- # '[' -z 64484 ']' 00:07:45.796 19:22:11 -- common/autotest_common.sh@940 -- # kill -0 64484 00:07:46.053 19:22:11 -- common/autotest_common.sh@941 -- # uname 00:07:46.053 19:22:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:46.053 19:22:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64484 00:07:46.053 killing process with pid 64484 00:07:46.053 19:22:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:46.053 19:22:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:46.053 19:22:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64484' 00:07:46.053 19:22:11 -- common/autotest_common.sh@955 -- # kill 64484 00:07:46.053 19:22:11 -- common/autotest_common.sh@960 -- # wait 64484 00:07:49.364 19:22:14 -- event/cpu_locks.sh@16 -- # [[ -z 64506 ]] 00:07:49.364 19:22:14 -- event/cpu_locks.sh@16 -- # killprocess 64506 00:07:49.364 19:22:14 -- common/autotest_common.sh@936 -- # '[' -z 64506 ']' 00:07:49.364 19:22:14 -- common/autotest_common.sh@940 -- # kill -0 
64506 00:07:49.364 19:22:14 -- common/autotest_common.sh@941 -- # uname 00:07:49.364 19:22:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:49.364 19:22:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64506 00:07:49.364 killing process with pid 64506 00:07:49.364 19:22:14 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:49.364 19:22:14 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:49.364 19:22:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64506' 00:07:49.364 19:22:14 -- common/autotest_common.sh@955 -- # kill 64506 00:07:49.364 19:22:14 -- common/autotest_common.sh@960 -- # wait 64506 00:07:51.282 19:22:16 -- event/cpu_locks.sh@18 -- # rm -f 00:07:51.282 19:22:16 -- event/cpu_locks.sh@1 -- # cleanup 00:07:51.282 Process with pid 64484 is not found 00:07:51.282 19:22:16 -- event/cpu_locks.sh@15 -- # [[ -z 64484 ]] 00:07:51.282 19:22:16 -- event/cpu_locks.sh@15 -- # killprocess 64484 00:07:51.282 19:22:16 -- common/autotest_common.sh@936 -- # '[' -z 64484 ']' 00:07:51.282 19:22:16 -- common/autotest_common.sh@940 -- # kill -0 64484 00:07:51.282 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (64484) - No such process 00:07:51.282 19:22:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 64484 is not found' 00:07:51.282 19:22:16 -- event/cpu_locks.sh@16 -- # [[ -z 64506 ]] 00:07:51.282 19:22:16 -- event/cpu_locks.sh@16 -- # killprocess 64506 00:07:51.282 19:22:16 -- common/autotest_common.sh@936 -- # '[' -z 64506 ']' 00:07:51.282 19:22:16 -- common/autotest_common.sh@940 -- # kill -0 64506 00:07:51.282 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (64506) - No such process 00:07:51.282 Process with pid 64506 is not found 00:07:51.282 19:22:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 64506 is not found' 00:07:51.282 19:22:16 -- event/cpu_locks.sh@18 -- # rm -f 00:07:51.282 00:07:51.282 real 0m55.627s 00:07:51.282 user 1m33.132s 00:07:51.282 sys 0m6.406s 00:07:51.282 19:22:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:51.282 ************************************ 00:07:51.282 END TEST cpu_locks 00:07:51.282 ************************************ 00:07:51.282 19:22:16 -- common/autotest_common.sh@10 -- # set +x 00:07:51.282 ************************************ 00:07:51.282 END TEST event 00:07:51.282 ************************************ 00:07:51.282 00:07:51.282 real 1m29.854s 00:07:51.283 user 2m39.887s 00:07:51.283 sys 0m10.469s 00:07:51.283 19:22:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:51.283 19:22:16 -- common/autotest_common.sh@10 -- # set +x 00:07:51.541 19:22:16 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:51.542 19:22:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:51.542 19:22:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.542 19:22:16 -- common/autotest_common.sh@10 -- # set +x 00:07:51.542 ************************************ 00:07:51.542 START TEST thread 00:07:51.542 ************************************ 00:07:51.542 19:22:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:51.542 * Looking for test storage... 
00:07:51.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:51.542 19:22:17 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:51.542 19:22:17 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:51.542 19:22:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.542 19:22:17 -- common/autotest_common.sh@10 -- # set +x 00:07:51.801 ************************************ 00:07:51.801 START TEST thread_poller_perf 00:07:51.801 ************************************ 00:07:51.801 19:22:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:51.801 [2024-04-24 19:22:17.322479] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:07:51.801 [2024-04-24 19:22:17.322695] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64713 ] 00:07:52.061 [2024-04-24 19:22:17.490005] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.061 [2024-04-24 19:22:17.720715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.061 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:53.971 ====================================== 00:07:53.971 busy:2302222764 (cyc) 00:07:53.971 total_run_count: 383000 00:07:53.971 tsc_hz: 2290000000 (cyc) 00:07:53.971 ====================================== 00:07:53.971 poller_cost: 6011 (cyc), 2624 (nsec) 00:07:53.971 00:07:53.971 real 0m1.893s 00:07:53.971 user 0m1.683s 00:07:53.971 sys 0m0.101s 00:07:53.971 19:22:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:53.971 ************************************ 00:07:53.971 END TEST thread_poller_perf 00:07:53.971 ************************************ 00:07:53.971 19:22:19 -- common/autotest_common.sh@10 -- # set +x 00:07:53.971 19:22:19 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:53.971 19:22:19 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:53.971 19:22:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.971 19:22:19 -- common/autotest_common.sh@10 -- # set +x 00:07:53.971 ************************************ 00:07:53.971 START TEST thread_poller_perf 00:07:53.971 ************************************ 00:07:53.971 19:22:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:53.971 [2024-04-24 19:22:19.351552] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:07:53.971 [2024-04-24 19:22:19.351694] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64756 ] 00:07:53.971 [2024-04-24 19:22:19.516582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.236 Running 1000 pollers for 1 seconds with 0 microseconds period. 
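Note: the ====== summary above for the 1 microsecond run reduces to simple arithmetic: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. Checked with shell arithmetic against the numbers printed above:

  echo $(( 2302222764 / 383000 ))              # 6011 cycles per poller execution
  echo $(( 6011 * 1000000000 / 2290000000 ))   # ~2624 nsec at the 2.29 GHz TSC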
00:07:54.236 [2024-04-24 19:22:19.798684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.612 ====================================== 00:07:55.612 busy:2293891514 (cyc) 00:07:55.612 total_run_count: 4613000 00:07:55.612 tsc_hz: 2290000000 (cyc) 00:07:55.612 ====================================== 00:07:55.612 poller_cost: 497 (cyc), 217 (nsec) 00:07:55.612 00:07:55.612 real 0m1.968s 00:07:55.612 user 0m1.759s 00:07:55.612 sys 0m0.100s 00:07:55.612 19:22:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:55.612 ************************************ 00:07:55.612 END TEST thread_poller_perf 00:07:55.612 ************************************ 00:07:55.612 19:22:21 -- common/autotest_common.sh@10 -- # set +x 00:07:55.871 19:22:21 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:55.871 ************************************ 00:07:55.871 END TEST thread 00:07:55.871 ************************************ 00:07:55.871 00:07:55.871 real 0m4.267s 00:07:55.871 user 0m3.594s 00:07:55.871 sys 0m0.439s 00:07:55.871 19:22:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:55.871 19:22:21 -- common/autotest_common.sh@10 -- # set +x 00:07:55.871 19:22:21 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:55.871 19:22:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:55.871 19:22:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:55.871 19:22:21 -- common/autotest_common.sh@10 -- # set +x 00:07:55.871 ************************************ 00:07:55.871 START TEST accel 00:07:55.871 ************************************ 00:07:55.871 19:22:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:56.130 * Looking for test storage... 00:07:56.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:56.130 19:22:21 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:56.130 19:22:21 -- accel/accel.sh@82 -- # get_expected_opcs 00:07:56.130 19:22:21 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:56.130 19:22:21 -- accel/accel.sh@62 -- # spdk_tgt_pid=64845 00:07:56.130 19:22:21 -- accel/accel.sh@63 -- # waitforlisten 64845 00:07:56.130 19:22:21 -- accel/accel.sh@61 -- # build_accel_config 00:07:56.130 19:22:21 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:56.130 19:22:21 -- common/autotest_common.sh@817 -- # '[' -z 64845 ']' 00:07:56.130 19:22:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.130 19:22:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.130 19:22:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.130 19:22:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.130 19:22:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.130 19:22:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:56.130 19:22:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.130 19:22:21 -- accel/accel.sh@40 -- # local IFS=, 00:07:56.130 19:22:21 -- accel/accel.sh@41 -- # jq -r . 00:07:56.130 19:22:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:56.130 19:22:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:56.130 19:22:21 -- common/autotest_common.sh@10 -- # set +x 00:07:56.130 [2024-04-24 19:22:21.709877] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:07:56.130 [2024-04-24 19:22:21.710093] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64845 ] 00:07:56.388 [2024-04-24 19:22:21.872479] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.645 [2024-04-24 19:22:22.141958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.579 19:22:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:57.579 19:22:23 -- common/autotest_common.sh@850 -- # return 0 00:07:57.579 19:22:23 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:57.579 19:22:23 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:57.579 19:22:23 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:57.579 19:22:23 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:57.579 19:22:23 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:57.579 19:22:23 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:57.579 19:22:23 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:57.579 19:22:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:57.579 19:22:23 -- common/autotest_common.sh@10 -- # set +x 00:07:57.579 19:22:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- 
accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # IFS== 00:07:57.579 19:22:23 -- accel/accel.sh@72 -- # read -r opc module 00:07:57.579 19:22:23 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:57.579 19:22:23 -- accel/accel.sh@75 -- # killprocess 64845 00:07:57.579 19:22:23 -- common/autotest_common.sh@936 -- # '[' -z 64845 ']' 00:07:57.579 19:22:23 -- common/autotest_common.sh@940 -- # kill -0 64845 00:07:57.579 19:22:23 -- common/autotest_common.sh@941 -- # uname 00:07:57.579 19:22:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:57.579 19:22:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64845 00:07:57.837 19:22:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:57.837 19:22:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:57.837 19:22:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64845' 00:07:57.837 killing process with pid 64845 00:07:57.837 19:22:23 -- common/autotest_common.sh@955 -- # kill 64845 00:07:57.837 19:22:23 -- common/autotest_common.sh@960 -- # wait 64845 00:08:00.371 19:22:26 -- accel/accel.sh@76 -- # trap - ERR 00:08:00.371 19:22:26 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:08:00.371 19:22:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:00.371 19:22:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.371 19:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:00.631 19:22:26 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:08:00.631 19:22:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 
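Note: the long IFS== / read -r loop above is just parsing the opcode-to-module table: accel_get_opc_assignments returns a JSON object, the jq filter flattens it to opc=module lines, and every opcode in this software-only run maps to "software". The same query run by hand (jq filter verbatim from the xtrace; rpc_cmd in the trace wraps scripts/rpc.py, path assumed):

  # list current opcode assignments, e.g. copy=software, fill=software, ...
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
      | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'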
00:08:00.631 19:22:26 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.631 19:22:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.631 19:22:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.631 19:22:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.631 19:22:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.631 19:22:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.631 19:22:26 -- accel/accel.sh@40 -- # local IFS=, 00:08:00.631 19:22:26 -- accel/accel.sh@41 -- # jq -r . 00:08:00.631 19:22:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:00.631 19:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:00.631 19:22:26 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:08:00.631 19:22:26 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:00.631 19:22:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.631 19:22:26 -- common/autotest_common.sh@10 -- # set +x 00:08:00.631 ************************************ 00:08:00.631 START TEST accel_missing_filename 00:08:00.631 ************************************ 00:08:00.631 19:22:26 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:08:00.631 19:22:26 -- common/autotest_common.sh@638 -- # local es=0 00:08:00.631 19:22:26 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:08:00.631 19:22:26 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:08:00.631 19:22:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:00.631 19:22:26 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:08:00.631 19:22:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:00.631 19:22:26 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:08:00.631 19:22:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:08:00.631 19:22:26 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.631 19:22:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.631 19:22:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.631 19:22:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.631 19:22:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.631 19:22:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.631 19:22:26 -- accel/accel.sh@40 -- # local IFS=, 00:08:00.631 19:22:26 -- accel/accel.sh@41 -- # jq -r . 00:08:00.890 [2024-04-24 19:22:26.335656] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:00.890 [2024-04-24 19:22:26.335838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64940 ] 00:08:00.890 [2024-04-24 19:22:26.499298] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.149 [2024-04-24 19:22:26.745516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.408 [2024-04-24 19:22:27.031761] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:01.977 [2024-04-24 19:22:27.627968] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:08:02.544 A filename is required. 
00:08:02.544 19:22:28 -- common/autotest_common.sh@641 -- # es=234 00:08:02.544 19:22:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:02.544 19:22:28 -- common/autotest_common.sh@650 -- # es=106 00:08:02.544 ************************************ 00:08:02.544 END TEST accel_missing_filename 00:08:02.544 ************************************ 00:08:02.544 19:22:28 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:02.544 19:22:28 -- common/autotest_common.sh@658 -- # es=1 00:08:02.544 19:22:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:02.544 00:08:02.544 real 0m1.831s 00:08:02.544 user 0m1.593s 00:08:02.544 sys 0m0.171s 00:08:02.544 19:22:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:02.544 19:22:28 -- common/autotest_common.sh@10 -- # set +x 00:08:02.544 19:22:28 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:02.544 19:22:28 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:08:02.544 19:22:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.544 19:22:28 -- common/autotest_common.sh@10 -- # set +x 00:08:02.809 ************************************ 00:08:02.809 START TEST accel_compress_verify 00:08:02.809 ************************************ 00:08:02.809 19:22:28 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:02.809 19:22:28 -- common/autotest_common.sh@638 -- # local es=0 00:08:02.809 19:22:28 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:02.809 19:22:28 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:08:02.809 19:22:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:02.809 19:22:28 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:08:02.809 19:22:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:02.809 19:22:28 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:02.809 19:22:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:02.809 19:22:28 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.809 19:22:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:02.810 19:22:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:02.810 19:22:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.810 19:22:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.810 19:22:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:02.810 19:22:28 -- accel/accel.sh@40 -- # local IFS=, 00:08:02.810 19:22:28 -- accel/accel.sh@41 -- # jq -r . 00:08:02.810 [2024-04-24 19:22:28.276300] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
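Note: the compress run that just started is meant to abort: per the option help dumped further down, -w compress requires -l (the uncompressed input file) and does not support the -y verify switch. A well-formed invocation of the same binary would keep -l and drop -y, along these lines (config plumbing via -c omitted):

  # compress the test input without result verification
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib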
00:08:02.810 [2024-04-24 19:22:28.276398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64986 ] 00:08:02.810 [2024-04-24 19:22:28.438408] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.071 [2024-04-24 19:22:28.704990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.330 [2024-04-24 19:22:28.971732] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.899 [2024-04-24 19:22:29.561351] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:08:04.468 00:08:04.468 Compression does not support the verify option, aborting. 00:08:04.468 19:22:30 -- common/autotest_common.sh@641 -- # es=161 00:08:04.468 19:22:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:04.468 19:22:30 -- common/autotest_common.sh@650 -- # es=33 00:08:04.468 19:22:30 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:04.468 19:22:30 -- common/autotest_common.sh@658 -- # es=1 00:08:04.468 19:22:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:04.468 00:08:04.468 real 0m1.808s 00:08:04.468 user 0m1.587s 00:08:04.468 sys 0m0.158s 00:08:04.468 19:22:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:04.468 19:22:30 -- common/autotest_common.sh@10 -- # set +x 00:08:04.468 ************************************ 00:08:04.468 END TEST accel_compress_verify 00:08:04.468 ************************************ 00:08:04.468 19:22:30 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:08:04.468 19:22:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:04.468 19:22:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.468 19:22:30 -- common/autotest_common.sh@10 -- # set +x 00:08:04.727 ************************************ 00:08:04.727 START TEST accel_wrong_workload 00:08:04.727 ************************************ 00:08:04.727 19:22:30 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:08:04.727 19:22:30 -- common/autotest_common.sh@638 -- # local es=0 00:08:04.727 19:22:30 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:08:04.727 19:22:30 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:08:04.727 19:22:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:04.727 19:22:30 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:08:04.727 19:22:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:04.727 19:22:30 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:08:04.727 19:22:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:08:04.727 19:22:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.727 19:22:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.727 19:22:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.728 19:22:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.728 19:22:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.728 19:22:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.728 19:22:30 -- accel/accel.sh@40 -- # local IFS=, 00:08:04.728 19:22:30 -- accel/accel.sh@41 -- # jq -r . 
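build_accel_config, traced just above, hands accel_perf its optional JSON over an anonymous descriptor; a hedged sketch of the pattern (the payload wrapper shape is assumed — here accel_json_cfg=() stays empty, so no config is actually injected):

    accel_json_cfg=()        # per-test JSON fragments; none for this test
    IFS=,                    # fragments would be comma-joined into one object
    build/examples/accel_perf -c <(jq -r . <<< "{${accel_json_cfg[*]}}") -t 1 -w foobar
    # the <(...) substitution shows up to the app as a /dev/fd/NN path -- 62 in
    # this trace -- which is why -c /dev/fd/62 appears on the traced command line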
00:08:04.728 Unsupported workload type: foobar 00:08:04.728 [2024-04-24 19:22:30.233800] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:08:04.728 accel_perf options: 00:08:04.728 [-h help message] 00:08:04.728 [-q queue depth per core] 00:08:04.728 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:04.728 [-T number of threads per core 00:08:04.728 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:04.728 [-t time in seconds] 00:08:04.728 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:04.728 [ dif_verify, , dif_generate, dif_generate_copy 00:08:04.728 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:04.728 [-l for compress/decompress workloads, name of uncompressed input file 00:08:04.728 [-S for crc32c workload, use this seed value (default 0) 00:08:04.728 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:04.728 [-f for fill workload, use this BYTE value (default 255) 00:08:04.728 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:04.728 [-y verify result if this switch is on] 00:08:04.728 [-a tasks to allocate per core (default: same value as -q)] 00:08:04.728 Can be used to spread operations across a wider range of memory. 00:08:04.728 19:22:30 -- common/autotest_common.sh@641 -- # es=1 00:08:04.728 ************************************ 00:08:04.728 END TEST accel_wrong_workload 00:08:04.728 ************************************ 00:08:04.728 19:22:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:04.728 19:22:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:04.728 19:22:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:04.728 00:08:04.728 real 0m0.086s 00:08:04.728 user 0m0.086s 00:08:04.728 sys 0m0.044s 00:08:04.728 19:22:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:04.728 19:22:30 -- common/autotest_common.sh@10 -- # set +x 00:08:04.728 19:22:30 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:08:04.728 19:22:30 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:08:04.728 19:22:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.728 19:22:30 -- common/autotest_common.sh@10 -- # set +x 00:08:04.728 ************************************ 00:08:04.728 START TEST accel_negative_buffers 00:08:04.728 ************************************ 00:08:04.728 19:22:30 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:08:04.728 19:22:30 -- common/autotest_common.sh@638 -- # local es=0 00:08:04.728 19:22:30 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:08:04.728 19:22:30 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:08:04.728 19:22:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:04.728 19:22:30 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:08:04.728 19:22:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:04.728 19:22:30 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:08:04.728 19:22:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:08:04.728 19:22:30 -- accel/accel.sh@12 -- # 
build_accel_config 00:08:04.728 19:22:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.728 19:22:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.728 19:22:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.728 19:22:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.728 19:22:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.728 19:22:30 -- accel/accel.sh@40 -- # local IFS=, 00:08:04.728 19:22:30 -- accel/accel.sh@41 -- # jq -r . 00:08:04.987 -x option must be non-negative. 00:08:04.987 [2024-04-24 19:22:30.449938] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:08:04.987 accel_perf options: 00:08:04.987 [-h help message] 00:08:04.987 [-q queue depth per core] 00:08:04.987 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:08:04.987 [-T number of threads per core 00:08:04.987 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:08:04.987 [-t time in seconds] 00:08:04.987 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:08:04.987 [ dif_verify, , dif_generate, dif_generate_copy 00:08:04.987 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:08:04.987 [-l for compress/decompress workloads, name of uncompressed input file 00:08:04.987 [-S for crc32c workload, use this seed value (default 0) 00:08:04.987 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:08:04.987 [-f for fill workload, use this BYTE value (default 255) 00:08:04.987 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:08:04.987 [-y verify result if this switch is on] 00:08:04.987 [-a tasks to allocate per core (default: same value as -q)] 00:08:04.987 Can be used to spread operations across a wider range of memory. 
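Per the usage text above, -x sets the number of xor source buffers (minimum 2), so the -1 passed by this test is rejected during option parsing; hedged examples of the two cases:

    $ build/examples/accel_perf -t 1 -w xor -y -x -1   # fails: "-x option must be non-negative."
    $ build/examples/accel_perf -t 1 -w xor -y -x 3    # accepted: xor across three source buffers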
00:08:04.987 19:22:30 -- common/autotest_common.sh@641 -- # es=1 00:08:04.987 19:22:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:04.987 19:22:30 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:04.987 19:22:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:04.987 ************************************ 00:08:04.987 END TEST accel_negative_buffers 00:08:04.987 ************************************ 00:08:04.987 00:08:04.987 real 0m0.089s 00:08:04.987 user 0m0.094s 00:08:04.987 sys 0m0.036s 00:08:04.987 19:22:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:04.987 19:22:30 -- common/autotest_common.sh@10 -- # set +x 00:08:04.987 19:22:30 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:08:04.987 19:22:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:04.987 19:22:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.987 19:22:30 -- common/autotest_common.sh@10 -- # set +x 00:08:04.987 ************************************ 00:08:04.987 START TEST accel_crc32c 00:08:04.987 ************************************ 00:08:04.987 19:22:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:08:04.987 19:22:30 -- accel/accel.sh@16 -- # local accel_opc 00:08:04.987 19:22:30 -- accel/accel.sh@17 -- # local accel_module 00:08:04.987 19:22:30 -- accel/accel.sh@19 -- # IFS=: 00:08:04.987 19:22:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:08:04.987 19:22:30 -- accel/accel.sh@19 -- # read -r var val 00:08:04.987 19:22:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:08:04.987 19:22:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.987 19:22:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:04.987 19:22:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:04.987 19:22:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.987 19:22:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.987 19:22:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:04.987 19:22:30 -- accel/accel.sh@40 -- # local IFS=, 00:08:04.987 19:22:30 -- accel/accel.sh@41 -- # jq -r . 00:08:05.246 [2024-04-24 19:22:30.675448] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
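The crc32c case exercises accel_perf's seed and verify knobs; the val= trace that follows pins them down. Spelled out as a plain invocation (hedged, flags as documented in the usage text above):

    $ build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # -S 32 seeds the crc32c computation, -y verifies each result, -t 1 runs for
    # one second; the remaining traced values ('4096 bytes', software, 32,
    # '1 seconds', Yes) are the defaults the harness reads back before the run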
00:08:05.246 [2024-04-24 19:22:30.675661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65079 ] 00:08:05.246 [2024-04-24 19:22:30.838464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.506 [2024-04-24 19:22:31.095762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val= 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val= 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val=0x1 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val= 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val= 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val=crc32c 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val=32 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val= 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val=software 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@22 -- # accel_module=software 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val=32 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val=32 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val=1 00:08:05.766 19:22:31 
-- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val=Yes 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val= 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:05.766 19:22:31 -- accel/accel.sh@20 -- # val= 00:08:05.766 19:22:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # IFS=: 00:08:05.766 19:22:31 -- accel/accel.sh@19 -- # read -r var val 00:08:07.686 19:22:33 -- accel/accel.sh@20 -- # val= 00:08:07.686 19:22:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.686 19:22:33 -- accel/accel.sh@19 -- # IFS=: 00:08:07.686 19:22:33 -- accel/accel.sh@19 -- # read -r var val 00:08:07.686 19:22:33 -- accel/accel.sh@20 -- # val= 00:08:07.686 19:22:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.686 19:22:33 -- accel/accel.sh@19 -- # IFS=: 00:08:07.686 19:22:33 -- accel/accel.sh@19 -- # read -r var val 00:08:07.686 19:22:33 -- accel/accel.sh@20 -- # val= 00:08:07.686 19:22:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.686 19:22:33 -- accel/accel.sh@19 -- # IFS=: 00:08:07.686 19:22:33 -- accel/accel.sh@19 -- # read -r var val 00:08:07.686 19:22:33 -- accel/accel.sh@20 -- # val= 00:08:07.686 19:22:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.686 19:22:33 -- accel/accel.sh@19 -- # IFS=: 00:08:07.686 19:22:33 -- accel/accel.sh@19 -- # read -r var val 00:08:07.686 19:22:33 -- accel/accel.sh@20 -- # val= 00:08:07.687 19:22:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.687 19:22:33 -- accel/accel.sh@19 -- # IFS=: 00:08:07.687 19:22:33 -- accel/accel.sh@19 -- # read -r var val 00:08:07.687 19:22:33 -- accel/accel.sh@20 -- # val= 00:08:07.687 19:22:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.687 19:22:33 -- accel/accel.sh@19 -- # IFS=: 00:08:07.687 19:22:33 -- accel/accel.sh@19 -- # read -r var val 00:08:07.687 19:22:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.687 19:22:33 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:07.687 19:22:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.687 00:08:07.687 real 0m2.713s 00:08:07.687 user 0m2.456s 00:08:07.687 sys 0m0.168s 00:08:07.687 19:22:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:07.687 19:22:33 -- common/autotest_common.sh@10 -- # set +x 00:08:07.687 ************************************ 00:08:07.687 END TEST accel_crc32c 00:08:07.687 ************************************ 00:08:07.945 19:22:33 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:08:07.945 19:22:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:07.945 19:22:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.945 19:22:33 -- common/autotest_common.sh@10 -- # set +x 00:08:07.945 ************************************ 00:08:07.945 START TEST accel_crc32c_C2 00:08:07.945 
************************************ 00:08:07.945 19:22:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:08:07.945 19:22:33 -- accel/accel.sh@16 -- # local accel_opc 00:08:07.945 19:22:33 -- accel/accel.sh@17 -- # local accel_module 00:08:07.945 19:22:33 -- accel/accel.sh@19 -- # IFS=: 00:08:07.945 19:22:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:08:07.945 19:22:33 -- accel/accel.sh@19 -- # read -r var val 00:08:07.945 19:22:33 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.945 19:22:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.945 19:22:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:07.945 19:22:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:08:07.945 19:22:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:07.945 19:22:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.945 19:22:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.945 19:22:33 -- accel/accel.sh@40 -- # local IFS=, 00:08:07.945 19:22:33 -- accel/accel.sh@41 -- # jq -r . 00:08:07.945 [2024-04-24 19:22:33.532843] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:07.945 [2024-04-24 19:22:33.532945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65135 ] 00:08:08.204 [2024-04-24 19:22:33.698376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.461 [2024-04-24 19:22:33.951189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.719 19:22:34 -- accel/accel.sh@20 -- # val= 00:08:08.719 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.719 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.719 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.719 19:22:34 -- accel/accel.sh@20 -- # val= 00:08:08.719 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.719 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.719 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.719 19:22:34 -- accel/accel.sh@20 -- # val=0x1 00:08:08.719 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.719 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.719 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.719 19:22:34 -- accel/accel.sh@20 -- # val= 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val= 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val=crc32c 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val=0 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" 
in 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val= 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val=software 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@22 -- # accel_module=software 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val=32 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val=32 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val=1 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val=Yes 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val= 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:08.720 19:22:34 -- accel/accel.sh@20 -- # val= 00:08:08.720 19:22:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # IFS=: 00:08:08.720 19:22:34 -- accel/accel.sh@19 -- # read -r var val 00:08:10.622 19:22:36 -- accel/accel.sh@20 -- # val= 00:08:10.622 19:22:36 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.622 19:22:36 -- accel/accel.sh@19 -- # IFS=: 00:08:10.622 19:22:36 -- accel/accel.sh@19 -- # read -r var val 00:08:10.622 19:22:36 -- accel/accel.sh@20 -- # val= 00:08:10.622 19:22:36 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.622 19:22:36 -- accel/accel.sh@19 -- # IFS=: 00:08:10.622 19:22:36 -- accel/accel.sh@19 -- # read -r var val 00:08:10.622 19:22:36 -- accel/accel.sh@20 -- # val= 00:08:10.622 19:22:36 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.622 19:22:36 -- accel/accel.sh@19 -- # IFS=: 00:08:10.622 19:22:36 -- accel/accel.sh@19 -- # read -r var val 00:08:10.622 19:22:36 -- accel/accel.sh@20 -- # val= 00:08:10.622 19:22:36 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.622 19:22:36 -- accel/accel.sh@19 -- # IFS=: 00:08:10.622 19:22:36 -- accel/accel.sh@19 -- # read -r var val 00:08:10.622 19:22:36 -- accel/accel.sh@20 -- # val= 00:08:10.622 19:22:36 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.622 19:22:36 -- accel/accel.sh@19 -- # IFS=: 00:08:10.622 19:22:36 -- accel/accel.sh@19 -- # read -r var val 00:08:10.622 19:22:36 -- accel/accel.sh@20 -- # val= 
00:08:10.622 19:22:36 -- accel/accel.sh@21 -- # case "$var" in 00:08:10.622 19:22:36 -- accel/accel.sh@19 -- # IFS=: 00:08:10.622 19:22:36 -- accel/accel.sh@19 -- # read -r var val 00:08:10.622 19:22:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:10.622 19:22:36 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:10.622 19:22:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:10.622 00:08:10.622 real 0m2.758s 00:08:10.622 user 0m2.512s 00:08:10.622 sys 0m0.158s 00:08:10.622 19:22:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:10.622 19:22:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.622 ************************************ 00:08:10.622 END TEST accel_crc32c_C2 00:08:10.622 ************************************ 00:08:10.622 19:22:36 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:10.622 19:22:36 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:10.622 19:22:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:10.622 19:22:36 -- common/autotest_common.sh@10 -- # set +x 00:08:10.880 ************************************ 00:08:10.880 START TEST accel_copy 00:08:10.880 ************************************ 00:08:10.880 19:22:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:08:10.880 19:22:36 -- accel/accel.sh@16 -- # local accel_opc 00:08:10.880 19:22:36 -- accel/accel.sh@17 -- # local accel_module 00:08:10.880 19:22:36 -- accel/accel.sh@19 -- # IFS=: 00:08:10.880 19:22:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:10.880 19:22:36 -- accel/accel.sh@19 -- # read -r var val 00:08:10.880 19:22:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:08:10.880 19:22:36 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.880 19:22:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:10.880 19:22:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:10.880 19:22:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.880 19:22:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.880 19:22:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:10.880 19:22:36 -- accel/accel.sh@40 -- # local IFS=, 00:08:10.880 19:22:36 -- accel/accel.sh@41 -- # jq -r . 00:08:10.880 [2024-04-24 19:22:36.416682] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
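The copy case below runs with defaults only; equivalently (hedged):

    $ build/examples/accel_perf -t 1 -w copy -y
    # -o is left at its 4 KiB default, which is why a single '4096 bytes'
    # buffer value appears in the trace; the paired 32s are presumably the
    # queue depth and its matching per-core task allocation (-q and -a)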
00:08:10.880 [2024-04-24 19:22:36.416853] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65186 ] 00:08:11.139 [2024-04-24 19:22:36.578253] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.405 [2024-04-24 19:22:36.832451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val= 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val= 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val=0x1 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val= 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val= 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val=copy 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@23 -- # accel_opc=copy 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val= 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val=software 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@22 -- # accel_module=software 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val=32 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val=32 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val=1 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:11.672 
19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val=Yes 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val= 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:11.672 19:22:37 -- accel/accel.sh@20 -- # val= 00:08:11.672 19:22:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # IFS=: 00:08:11.672 19:22:37 -- accel/accel.sh@19 -- # read -r var val 00:08:13.584 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:13.584 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.584 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:13.584 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:13.584 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:13.584 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.584 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:13.584 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:13.584 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:13.585 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.585 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:13.585 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:13.585 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:13.585 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.585 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:13.585 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:13.585 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:13.585 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.585 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:13.585 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:13.585 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:13.585 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:13.585 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:13.585 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:13.585 19:22:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:13.585 19:22:39 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:13.585 19:22:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:13.585 00:08:13.585 real 0m2.731s 00:08:13.585 user 0m2.488s 00:08:13.585 sys 0m0.152s 00:08:13.585 19:22:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:13.585 19:22:39 -- common/autotest_common.sh@10 -- # set +x 00:08:13.585 ************************************ 00:08:13.585 END TEST accel_copy 00:08:13.585 ************************************ 00:08:13.585 19:22:39 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:13.585 19:22:39 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:13.585 19:22:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.585 19:22:39 -- common/autotest_common.sh@10 -- # set +x 00:08:13.585 ************************************ 00:08:13.585 START TEST accel_fill 00:08:13.585 ************************************ 00:08:13.585 19:22:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:13.585 19:22:39 -- accel/accel.sh@16 -- # local accel_opc 00:08:13.585 19:22:39 -- accel/accel.sh@17 -- # local 
accel_module 00:08:13.585 19:22:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:13.585 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:13.585 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:13.585 19:22:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:13.585 19:22:39 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.585 19:22:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:13.585 19:22:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:13.585 19:22:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.585 19:22:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.585 19:22:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:13.585 19:22:39 -- accel/accel.sh@40 -- # local IFS=, 00:08:13.585 19:22:39 -- accel/accel.sh@41 -- # jq -r . 00:08:13.845 [2024-04-24 19:22:39.278170] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:13.845 [2024-04-24 19:22:39.278317] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65236 ] 00:08:13.845 [2024-04-24 19:22:39.443769] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.105 [2024-04-24 19:22:39.679974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val=0x1 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val=fill 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@23 -- # accel_opc=fill 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val=0x80 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case 
"$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val=software 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@22 -- # accel_module=software 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val=64 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val=64 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val=1 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val=Yes 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:14.364 19:22:39 -- accel/accel.sh@20 -- # val= 00:08:14.364 19:22:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # IFS=: 00:08:14.364 19:22:39 -- accel/accel.sh@19 -- # read -r var val 00:08:16.269 19:22:41 -- accel/accel.sh@20 -- # val= 00:08:16.269 19:22:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.269 19:22:41 -- accel/accel.sh@19 -- # IFS=: 00:08:16.269 19:22:41 -- accel/accel.sh@19 -- # read -r var val 00:08:16.269 19:22:41 -- accel/accel.sh@20 -- # val= 00:08:16.269 19:22:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.269 19:22:41 -- accel/accel.sh@19 -- # IFS=: 00:08:16.269 19:22:41 -- accel/accel.sh@19 -- # read -r var val 00:08:16.269 19:22:41 -- accel/accel.sh@20 -- # val= 00:08:16.269 19:22:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.269 19:22:41 -- accel/accel.sh@19 -- # IFS=: 00:08:16.269 19:22:41 -- accel/accel.sh@19 -- # read -r var val 00:08:16.269 19:22:41 -- accel/accel.sh@20 -- # val= 00:08:16.269 19:22:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.269 19:22:41 -- accel/accel.sh@19 -- # IFS=: 00:08:16.269 19:22:41 -- accel/accel.sh@19 -- # read -r var val 00:08:16.269 19:22:41 -- accel/accel.sh@20 -- # val= 00:08:16.269 19:22:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.269 19:22:41 -- accel/accel.sh@19 -- # IFS=: 00:08:16.269 19:22:41 -- accel/accel.sh@19 -- # read -r var val 00:08:16.269 19:22:41 -- accel/accel.sh@20 -- # val= 00:08:16.269 19:22:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:16.269 19:22:41 -- accel/accel.sh@19 -- # IFS=: 00:08:16.269 19:22:41 -- accel/accel.sh@19 -- # read -r var val 00:08:16.269 19:22:41 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:08:16.269 19:22:41 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:16.269 19:22:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:16.269 00:08:16.269 real 0m2.707s 00:08:16.269 user 0m2.446s 00:08:16.269 sys 0m0.173s 00:08:16.269 19:22:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:16.269 19:22:41 -- common/autotest_common.sh@10 -- # set +x 00:08:16.269 ************************************ 00:08:16.269 END TEST accel_fill 00:08:16.269 ************************************ 00:08:16.528 19:22:41 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:16.528 19:22:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:16.528 19:22:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.528 19:22:41 -- common/autotest_common.sh@10 -- # set +x 00:08:16.528 ************************************ 00:08:16.528 START TEST accel_copy_crc32c 00:08:16.528 ************************************ 00:08:16.528 19:22:42 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:08:16.528 19:22:42 -- accel/accel.sh@16 -- # local accel_opc 00:08:16.528 19:22:42 -- accel/accel.sh@17 -- # local accel_module 00:08:16.528 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:16.528 19:22:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:16.528 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:16.528 19:22:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:16.528 19:22:42 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.528 19:22:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:16.528 19:22:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:16.528 19:22:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.528 19:22:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.528 19:22:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:16.528 19:22:42 -- accel/accel.sh@40 -- # local IFS=, 00:08:16.528 19:22:42 -- accel/accel.sh@41 -- # jq -r . 00:08:16.528 [2024-04-24 19:22:42.112509] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
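copy_crc32c chains the two earlier operations, and the val= trace that follows reflects it: seed 0 plus two 4096-byte buffers (source and copy destination) where plain crc32c carried one. A hedged equivalent invocation:

    $ build/examples/accel_perf -t 1 -w copy_crc32c -y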
00:08:16.528 [2024-04-24 19:22:42.112602] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65292 ] 00:08:16.787 [2024-04-24 19:22:42.277336] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.047 [2024-04-24 19:22:42.523625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val= 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val= 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val=0x1 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val= 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val= 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val=0 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val= 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val=software 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@22 -- # accel_module=software 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val=32 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val=32 
00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val=1 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val=Yes 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val= 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:17.306 19:22:42 -- accel/accel.sh@20 -- # val= 00:08:17.306 19:22:42 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # IFS=: 00:08:17.306 19:22:42 -- accel/accel.sh@19 -- # read -r var val 00:08:19.211 19:22:44 -- accel/accel.sh@20 -- # val= 00:08:19.211 19:22:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.211 19:22:44 -- accel/accel.sh@19 -- # IFS=: 00:08:19.211 19:22:44 -- accel/accel.sh@19 -- # read -r var val 00:08:19.211 19:22:44 -- accel/accel.sh@20 -- # val= 00:08:19.211 19:22:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.211 19:22:44 -- accel/accel.sh@19 -- # IFS=: 00:08:19.211 19:22:44 -- accel/accel.sh@19 -- # read -r var val 00:08:19.211 19:22:44 -- accel/accel.sh@20 -- # val= 00:08:19.211 19:22:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.211 19:22:44 -- accel/accel.sh@19 -- # IFS=: 00:08:19.212 19:22:44 -- accel/accel.sh@19 -- # read -r var val 00:08:19.212 19:22:44 -- accel/accel.sh@20 -- # val= 00:08:19.212 19:22:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.212 19:22:44 -- accel/accel.sh@19 -- # IFS=: 00:08:19.212 19:22:44 -- accel/accel.sh@19 -- # read -r var val 00:08:19.212 19:22:44 -- accel/accel.sh@20 -- # val= 00:08:19.212 19:22:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.212 19:22:44 -- accel/accel.sh@19 -- # IFS=: 00:08:19.212 19:22:44 -- accel/accel.sh@19 -- # read -r var val 00:08:19.212 19:22:44 -- accel/accel.sh@20 -- # val= 00:08:19.212 19:22:44 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.212 19:22:44 -- accel/accel.sh@19 -- # IFS=: 00:08:19.212 19:22:44 -- accel/accel.sh@19 -- # read -r var val 00:08:19.212 19:22:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:19.212 19:22:44 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:19.212 19:22:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.212 00:08:19.212 real 0m2.718s 00:08:19.212 user 0m2.471s 00:08:19.212 sys 0m0.160s 00:08:19.212 19:22:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:19.212 19:22:44 -- common/autotest_common.sh@10 -- # set +x 00:08:19.212 ************************************ 00:08:19.212 END TEST accel_copy_crc32c 00:08:19.212 ************************************ 00:08:19.212 19:22:44 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:19.212 19:22:44 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:08:19.212 19:22:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:19.212 19:22:44 -- common/autotest_common.sh@10 -- # set +x 00:08:19.470 ************************************ 00:08:19.470 START TEST accel_copy_crc32c_C2 00:08:19.470 ************************************ 00:08:19.470 19:22:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:19.470 19:22:44 -- accel/accel.sh@16 -- # local accel_opc 00:08:19.470 19:22:44 -- accel/accel.sh@17 -- # local accel_module 00:08:19.470 19:22:44 -- accel/accel.sh@19 -- # IFS=: 00:08:19.470 19:22:44 -- accel/accel.sh@19 -- # read -r var val 00:08:19.470 19:22:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:19.470 19:22:44 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:19.470 19:22:44 -- accel/accel.sh@12 -- # build_accel_config 00:08:19.470 19:22:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:19.470 19:22:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:19.470 19:22:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:19.470 19:22:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:19.470 19:22:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:19.470 19:22:44 -- accel/accel.sh@40 -- # local IFS=, 00:08:19.470 19:22:44 -- accel/accel.sh@41 -- # jq -r . 00:08:19.470 [2024-04-24 19:22:44.978000] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:19.470 [2024-04-24 19:22:44.978178] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65343 ] 00:08:19.470 [2024-04-24 19:22:45.140777] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.729 [2024-04-24 19:22:45.387699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val= 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val= 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val=0x1 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val= 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val= 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val=0 00:08:19.989 19:22:45 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val= 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val=software 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@22 -- # accel_module=software 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val=32 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val=32 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val=1 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val=Yes 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val= 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:19.989 19:22:45 -- accel/accel.sh@20 -- # val= 00:08:19.989 19:22:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # IFS=: 00:08:19.989 19:22:45 -- accel/accel.sh@19 -- # read -r var val 00:08:22.522 19:22:47 -- accel/accel.sh@20 -- # val= 00:08:22.522 19:22:47 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # IFS=: 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # read -r var val 00:08:22.522 19:22:47 -- accel/accel.sh@20 -- # val= 00:08:22.522 19:22:47 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # IFS=: 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # read -r var val 00:08:22.522 19:22:47 -- accel/accel.sh@20 -- # val= 00:08:22.522 19:22:47 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # IFS=: 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # read -r var val 
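The -C 2 variant is visible in the vals above: alongside seed 0 and the usual '4096 bytes' source there is an '8192 bytes' entry, apparently the destination sized for a two-element io vector (2 x 4096). Hedged equivalent:

    $ build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2
    # -C sets the io vector size per the usage text above; doubling it doubles
    # the traced destination buffer from 4096 to 8192 bytes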
00:08:22.522 19:22:47 -- accel/accel.sh@20 -- # val= 00:08:22.522 19:22:47 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # IFS=: 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # read -r var val 00:08:22.522 19:22:47 -- accel/accel.sh@20 -- # val= 00:08:22.522 19:22:47 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # IFS=: 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # read -r var val 00:08:22.522 19:22:47 -- accel/accel.sh@20 -- # val= 00:08:22.522 19:22:47 -- accel/accel.sh@21 -- # case "$var" in 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # IFS=: 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # read -r var val 00:08:22.522 19:22:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:22.522 19:22:47 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:22.522 19:22:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.522 00:08:22.522 real 0m2.754s 00:08:22.522 user 0m2.488s 00:08:22.522 sys 0m0.178s 00:08:22.522 19:22:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:22.522 19:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.522 ************************************ 00:08:22.522 END TEST accel_copy_crc32c_C2 00:08:22.522 ************************************ 00:08:22.522 19:22:47 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:22.522 19:22:47 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:22.522 19:22:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.522 19:22:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.522 ************************************ 00:08:22.522 START TEST accel_dualcast 00:08:22.522 ************************************ 00:08:22.522 19:22:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:08:22.522 19:22:47 -- accel/accel.sh@16 -- # local accel_opc 00:08:22.522 19:22:47 -- accel/accel.sh@17 -- # local accel_module 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # IFS=: 00:08:22.522 19:22:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:22.522 19:22:47 -- accel/accel.sh@19 -- # read -r var val 00:08:22.522 19:22:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:22.522 19:22:47 -- accel/accel.sh@12 -- # build_accel_config 00:08:22.522 19:22:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:22.522 19:22:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:22.522 19:22:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.522 19:22:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.522 19:22:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:22.522 19:22:47 -- accel/accel.sh@40 -- # local IFS=, 00:08:22.522 19:22:47 -- accel/accel.sh@41 -- # jq -r . 00:08:22.522 [2024-04-24 19:22:47.864698] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
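
Each suite in this stretch of the log follows the same harness pattern: run_test brackets and times the test, accel_test launches the prebuilt accel_perf example with the accel config supplied over /dev/fd/62, and the START/END banners plus the real/user/sys triple (bash time-style output) mark the pass boundary. A minimal sketch of the underlying invocation, using the exact path and flags recorded in this log:

    # one-second software-path dualcast run, as launched by accel_test here
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
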
00:08:22.522 [2024-04-24 19:22:47.864858] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65399 ] 00:08:22.522 [2024-04-24 19:22:48.029387] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.782 [2024-04-24 19:22:48.282026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.040 19:22:48 -- accel/accel.sh@20 -- # val= 00:08:23.040 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.040 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.040 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.040 19:22:48 -- accel/accel.sh@20 -- # val= 00:08:23.040 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.040 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.040 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.040 19:22:48 -- accel/accel.sh@20 -- # val=0x1 00:08:23.040 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val= 00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val= 00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val=dualcast 00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val= 00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val=software 00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@22 -- # accel_module=software 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val=32 00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val=32 00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val=1 00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val='1 seconds' 
00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val=Yes 00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val= 00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:23.041 19:22:48 -- accel/accel.sh@20 -- # val= 00:08:23.041 19:22:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # IFS=: 00:08:23.041 19:22:48 -- accel/accel.sh@19 -- # read -r var val 00:08:24.944 19:22:50 -- accel/accel.sh@20 -- # val= 00:08:24.944 19:22:50 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.944 19:22:50 -- accel/accel.sh@19 -- # IFS=: 00:08:24.944 19:22:50 -- accel/accel.sh@19 -- # read -r var val 00:08:24.944 19:22:50 -- accel/accel.sh@20 -- # val= 00:08:24.944 19:22:50 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.944 19:22:50 -- accel/accel.sh@19 -- # IFS=: 00:08:24.944 19:22:50 -- accel/accel.sh@19 -- # read -r var val 00:08:24.944 19:22:50 -- accel/accel.sh@20 -- # val= 00:08:24.944 19:22:50 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.944 19:22:50 -- accel/accel.sh@19 -- # IFS=: 00:08:24.944 19:22:50 -- accel/accel.sh@19 -- # read -r var val 00:08:24.944 19:22:50 -- accel/accel.sh@20 -- # val= 00:08:24.944 19:22:50 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.944 19:22:50 -- accel/accel.sh@19 -- # IFS=: 00:08:24.944 19:22:50 -- accel/accel.sh@19 -- # read -r var val 00:08:24.944 19:22:50 -- accel/accel.sh@20 -- # val= 00:08:24.944 19:22:50 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.944 19:22:50 -- accel/accel.sh@19 -- # IFS=: 00:08:24.944 19:22:50 -- accel/accel.sh@19 -- # read -r var val 00:08:24.944 19:22:50 -- accel/accel.sh@20 -- # val= 00:08:24.945 19:22:50 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.945 19:22:50 -- accel/accel.sh@19 -- # IFS=: 00:08:24.945 19:22:50 -- accel/accel.sh@19 -- # read -r var val 00:08:24.945 19:22:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:24.945 19:22:50 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:24.945 19:22:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:24.945 00:08:24.945 real 0m2.743s 00:08:24.945 user 0m2.479s 00:08:24.945 sys 0m0.170s 00:08:24.945 19:22:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:24.945 19:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:24.945 ************************************ 00:08:24.945 END TEST accel_dualcast 00:08:24.945 ************************************ 00:08:24.945 19:22:50 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:24.945 19:22:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:24.945 19:22:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:24.945 19:22:50 -- common/autotest_common.sh@10 -- # set +x 00:08:25.204 ************************************ 00:08:25.204 START TEST accel_compare 00:08:25.204 ************************************ 00:08:25.204 19:22:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:08:25.204 19:22:50 -- accel/accel.sh@16 -- # local accel_opc 00:08:25.204 19:22:50 -- accel/accel.sh@17 -- # local 
accel_module 00:08:25.204 19:22:50 -- accel/accel.sh@19 -- # IFS=: 00:08:25.204 19:22:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:25.204 19:22:50 -- accel/accel.sh@19 -- # read -r var val 00:08:25.204 19:22:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:25.204 19:22:50 -- accel/accel.sh@12 -- # build_accel_config 00:08:25.204 19:22:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:25.204 19:22:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:25.204 19:22:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.204 19:22:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.204 19:22:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:25.204 19:22:50 -- accel/accel.sh@40 -- # local IFS=, 00:08:25.204 19:22:50 -- accel/accel.sh@41 -- # jq -r . 00:08:25.204 [2024-04-24 19:22:50.726277] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:25.204 [2024-04-24 19:22:50.726485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65455 ] 00:08:25.463 [2024-04-24 19:22:50.890743] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.463 [2024-04-24 19:22:51.136189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val= 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val= 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val=0x1 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val= 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val= 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val=compare 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@23 -- # accel_opc=compare 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val= 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val=software 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 
00:08:25.722 19:22:51 -- accel/accel.sh@22 -- # accel_module=software 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val=32 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val=32 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val=1 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val=Yes 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val= 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:25.722 19:22:51 -- accel/accel.sh@20 -- # val= 00:08:25.722 19:22:51 -- accel/accel.sh@21 -- # case "$var" in 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # IFS=: 00:08:25.722 19:22:51 -- accel/accel.sh@19 -- # read -r var val 00:08:28.310 19:22:53 -- accel/accel.sh@20 -- # val= 00:08:28.310 19:22:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # IFS=: 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # read -r var val 00:08:28.310 19:22:53 -- accel/accel.sh@20 -- # val= 00:08:28.310 19:22:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # IFS=: 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # read -r var val 00:08:28.310 19:22:53 -- accel/accel.sh@20 -- # val= 00:08:28.310 19:22:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # IFS=: 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # read -r var val 00:08:28.310 19:22:53 -- accel/accel.sh@20 -- # val= 00:08:28.310 19:22:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # IFS=: 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # read -r var val 00:08:28.310 19:22:53 -- accel/accel.sh@20 -- # val= 00:08:28.310 19:22:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # IFS=: 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # read -r var val 00:08:28.310 19:22:53 -- accel/accel.sh@20 -- # val= 00:08:28.310 19:22:53 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # IFS=: 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # read -r var val 00:08:28.310 19:22:53 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:28.310 19:22:53 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:08:28.310 19:22:53 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:28.310 00:08:28.310 real 0m2.730s 00:08:28.310 user 0m2.458s 00:08:28.310 sys 
0m0.172s 00:08:28.310 19:22:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:28.310 19:22:53 -- common/autotest_common.sh@10 -- # set +x 00:08:28.310 ************************************ 00:08:28.310 END TEST accel_compare 00:08:28.310 ************************************ 00:08:28.310 19:22:53 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:08:28.310 19:22:53 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:28.310 19:22:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.310 19:22:53 -- common/autotest_common.sh@10 -- # set +x 00:08:28.310 ************************************ 00:08:28.310 START TEST accel_xor 00:08:28.310 ************************************ 00:08:28.310 19:22:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:08:28.310 19:22:53 -- accel/accel.sh@16 -- # local accel_opc 00:08:28.310 19:22:53 -- accel/accel.sh@17 -- # local accel_module 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # IFS=: 00:08:28.310 19:22:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:08:28.310 19:22:53 -- accel/accel.sh@19 -- # read -r var val 00:08:28.310 19:22:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:08:28.310 19:22:53 -- accel/accel.sh@12 -- # build_accel_config 00:08:28.310 19:22:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:28.310 19:22:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:28.310 19:22:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:28.310 19:22:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:28.310 19:22:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:28.310 19:22:53 -- accel/accel.sh@40 -- # local IFS=, 00:08:28.310 19:22:53 -- accel/accel.sh@41 -- # jq -r . 00:08:28.310 [2024-04-24 19:22:53.596371] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
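
The workload knobs visible in the traces map directly onto accel_perf flags: -t is the run time in seconds (val='1 seconds'), -w selects the opcode (accel_opc=xor), and -x sets the xor source-buffer count, which is 2 in this run (val=2) and raised to 3 in the -x 3 run that follows; -y appears to request verification of the result, though the log never states that explicitly. A hedged example combining them:

    # three-source xor for 1 second; -y taken to mean "verify output" (inferred, not stated in the log)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
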
00:08:28.310 [2024-04-24 19:22:53.596579] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65500 ] 00:08:28.310 [2024-04-24 19:22:53.760809] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.570 [2024-04-24 19:22:54.016886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.830 19:22:54 -- accel/accel.sh@20 -- # val= 00:08:28.830 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.830 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.830 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.830 19:22:54 -- accel/accel.sh@20 -- # val= 00:08:28.830 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.830 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.830 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.830 19:22:54 -- accel/accel.sh@20 -- # val=0x1 00:08:28.830 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.830 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.830 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.830 19:22:54 -- accel/accel.sh@20 -- # val= 00:08:28.830 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.830 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.830 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.830 19:22:54 -- accel/accel.sh@20 -- # val= 00:08:28.830 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.830 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.830 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.830 19:22:54 -- accel/accel.sh@20 -- # val=xor 00:08:28.830 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.830 19:22:54 -- accel/accel.sh@23 -- # accel_opc=xor 00:08:28.830 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.831 19:22:54 -- accel/accel.sh@20 -- # val=2 00:08:28.831 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.831 19:22:54 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:28.831 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.831 19:22:54 -- accel/accel.sh@20 -- # val= 00:08:28.831 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.831 19:22:54 -- accel/accel.sh@20 -- # val=software 00:08:28.831 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.831 19:22:54 -- accel/accel.sh@22 -- # accel_module=software 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.831 19:22:54 -- accel/accel.sh@20 -- # val=32 00:08:28.831 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.831 19:22:54 -- accel/accel.sh@20 -- # val=32 00:08:28.831 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.831 19:22:54 -- accel/accel.sh@20 -- # val=1 00:08:28.831 19:22:54 -- 
accel/accel.sh@21 -- # case "$var" in 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.831 19:22:54 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:28.831 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.831 19:22:54 -- accel/accel.sh@20 -- # val=Yes 00:08:28.831 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.831 19:22:54 -- accel/accel.sh@20 -- # val= 00:08:28.831 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:28.831 19:22:54 -- accel/accel.sh@20 -- # val= 00:08:28.831 19:22:54 -- accel/accel.sh@21 -- # case "$var" in 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # IFS=: 00:08:28.831 19:22:54 -- accel/accel.sh@19 -- # read -r var val 00:08:30.737 19:22:56 -- accel/accel.sh@20 -- # val= 00:08:30.737 19:22:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.737 19:22:56 -- accel/accel.sh@19 -- # IFS=: 00:08:30.737 19:22:56 -- accel/accel.sh@19 -- # read -r var val 00:08:30.737 19:22:56 -- accel/accel.sh@20 -- # val= 00:08:30.737 19:22:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.737 19:22:56 -- accel/accel.sh@19 -- # IFS=: 00:08:30.737 19:22:56 -- accel/accel.sh@19 -- # read -r var val 00:08:30.737 19:22:56 -- accel/accel.sh@20 -- # val= 00:08:30.737 19:22:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.737 19:22:56 -- accel/accel.sh@19 -- # IFS=: 00:08:30.737 19:22:56 -- accel/accel.sh@19 -- # read -r var val 00:08:30.737 19:22:56 -- accel/accel.sh@20 -- # val= 00:08:30.737 19:22:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.737 19:22:56 -- accel/accel.sh@19 -- # IFS=: 00:08:30.737 19:22:56 -- accel/accel.sh@19 -- # read -r var val 00:08:30.737 19:22:56 -- accel/accel.sh@20 -- # val= 00:08:30.737 19:22:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.737 19:22:56 -- accel/accel.sh@19 -- # IFS=: 00:08:30.737 19:22:56 -- accel/accel.sh@19 -- # read -r var val 00:08:30.737 19:22:56 -- accel/accel.sh@20 -- # val= 00:08:30.737 19:22:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:30.737 19:22:56 -- accel/accel.sh@19 -- # IFS=: 00:08:30.737 19:22:56 -- accel/accel.sh@19 -- # read -r var val 00:08:30.737 19:22:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:30.737 19:22:56 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:30.737 19:22:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:30.737 00:08:30.737 real 0m2.775s 00:08:30.737 user 0m2.513s 00:08:30.737 sys 0m0.174s 00:08:30.737 19:22:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:30.737 19:22:56 -- common/autotest_common.sh@10 -- # set +x 00:08:30.737 ************************************ 00:08:30.737 END TEST accel_xor 00:08:30.737 ************************************ 00:08:30.737 19:22:56 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:30.737 19:22:56 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:30.737 19:22:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:30.737 19:22:56 -- common/autotest_common.sh@10 -- # set +x 00:08:31.024 ************************************ 00:08:31.024 START TEST accel_xor 00:08:31.024 ************************************ 00:08:31.024 
19:22:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:08:31.025 19:22:56 -- accel/accel.sh@16 -- # local accel_opc 00:08:31.025 19:22:56 -- accel/accel.sh@17 -- # local accel_module 00:08:31.025 19:22:56 -- accel/accel.sh@19 -- # IFS=: 00:08:31.025 19:22:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:31.025 19:22:56 -- accel/accel.sh@19 -- # read -r var val 00:08:31.025 19:22:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:31.025 19:22:56 -- accel/accel.sh@12 -- # build_accel_config 00:08:31.025 19:22:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.025 19:22:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.025 19:22:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.025 19:22:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.025 19:22:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.025 19:22:56 -- accel/accel.sh@40 -- # local IFS=, 00:08:31.025 19:22:56 -- accel/accel.sh@41 -- # jq -r . 00:08:31.025 [2024-04-24 19:22:56.510039] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:31.025 [2024-04-24 19:22:56.510232] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65557 ] 00:08:31.025 [2024-04-24 19:22:56.672060] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.297 [2024-04-24 19:22:56.915599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val= 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val= 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val=0x1 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val= 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val= 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val=xor 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.556 19:22:57 -- accel/accel.sh@23 -- # accel_opc=xor 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val=3 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # IFS=: 
00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val= 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val=software 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.556 19:22:57 -- accel/accel.sh@22 -- # accel_module=software 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val=32 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val=32 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.556 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.556 19:22:57 -- accel/accel.sh@20 -- # val=1 00:08:31.556 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.557 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.557 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.557 19:22:57 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:31.557 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.557 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.557 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.557 19:22:57 -- accel/accel.sh@20 -- # val=Yes 00:08:31.557 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.557 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.557 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.557 19:22:57 -- accel/accel.sh@20 -- # val= 00:08:31.557 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.557 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.557 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:31.557 19:22:57 -- accel/accel.sh@20 -- # val= 00:08:31.557 19:22:57 -- accel/accel.sh@21 -- # case "$var" in 00:08:31.557 19:22:57 -- accel/accel.sh@19 -- # IFS=: 00:08:31.557 19:22:57 -- accel/accel.sh@19 -- # read -r var val 00:08:34.094 19:22:59 -- accel/accel.sh@20 -- # val= 00:08:34.094 19:22:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # IFS=: 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # read -r var val 00:08:34.094 19:22:59 -- accel/accel.sh@20 -- # val= 00:08:34.094 19:22:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # IFS=: 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # read -r var val 00:08:34.094 19:22:59 -- accel/accel.sh@20 -- # val= 00:08:34.094 19:22:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # IFS=: 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # read -r var val 00:08:34.094 19:22:59 -- accel/accel.sh@20 -- # val= 00:08:34.094 19:22:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # IFS=: 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # read -r var val 00:08:34.094 19:22:59 -- accel/accel.sh@20 -- # val= 00:08:34.094 19:22:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # IFS=: 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # read -r var val 00:08:34.094 19:22:59 -- accel/accel.sh@20 -- # val= 00:08:34.094 19:22:59 -- accel/accel.sh@21 -- # case "$var" in 
00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # IFS=: 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # read -r var val 00:08:34.094 19:22:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:34.094 19:22:59 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:34.094 19:22:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:34.094 00:08:34.094 real 0m2.774s 00:08:34.094 user 0m2.509s 00:08:34.094 sys 0m0.174s 00:08:34.094 19:22:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:34.094 19:22:59 -- common/autotest_common.sh@10 -- # set +x 00:08:34.094 ************************************ 00:08:34.094 END TEST accel_xor 00:08:34.094 ************************************ 00:08:34.094 19:22:59 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:34.094 19:22:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:34.094 19:22:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.094 19:22:59 -- common/autotest_common.sh@10 -- # set +x 00:08:34.094 ************************************ 00:08:34.094 START TEST accel_dif_verify 00:08:34.094 ************************************ 00:08:34.094 19:22:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:08:34.094 19:22:59 -- accel/accel.sh@16 -- # local accel_opc 00:08:34.094 19:22:59 -- accel/accel.sh@17 -- # local accel_module 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # IFS=: 00:08:34.094 19:22:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:34.094 19:22:59 -- accel/accel.sh@19 -- # read -r var val 00:08:34.094 19:22:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:34.094 19:22:59 -- accel/accel.sh@12 -- # build_accel_config 00:08:34.094 19:22:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:34.094 19:22:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:34.094 19:22:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:34.094 19:22:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:34.094 19:22:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:34.094 19:22:59 -- accel/accel.sh@40 -- # local IFS=, 00:08:34.094 19:22:59 -- accel/accel.sh@41 -- # jq -r . 00:08:34.094 [2024-04-24 19:22:59.430177] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
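
The buffer sizes traced in the dif_verify run below ('4096 bytes' data, '512 bytes', '8 bytes') are consistent with T10 DIF, where each 512-byte block carries an 8-byte integrity field, so a 4 KiB buffer carries eight of them; that interpretation is standard T10 background rather than something this log asserts. The invocation as recorded below:

    # verify DIF tags over 4 KiB of data in 512-byte blocks (sizes taken from the traces below)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
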
00:08:34.094 [2024-04-24 19:22:59.430398] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65615 ] 00:08:34.094 [2024-04-24 19:22:59.577587] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.354 [2024-04-24 19:22:59.827804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val= 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val= 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val=0x1 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val= 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val= 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val=dif_verify 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val='512 bytes' 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val='8 bytes' 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val= 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val=software 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@22 -- # accel_module=software 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 
-- # val=32 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val=32 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val=1 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val=No 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val= 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:34.613 19:23:00 -- accel/accel.sh@20 -- # val= 00:08:34.613 19:23:00 -- accel/accel.sh@21 -- # case "$var" in 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # IFS=: 00:08:34.613 19:23:00 -- accel/accel.sh@19 -- # read -r var val 00:08:36.515 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:36.515 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.515 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:36.515 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:36.515 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:36.515 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.515 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:36.515 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:36.515 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:36.515 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.515 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:36.515 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:36.515 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:36.515 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.515 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:36.515 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:36.515 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:36.515 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.515 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:36.515 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:36.515 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:36.515 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:36.515 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:36.515 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:36.515 19:23:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:36.515 19:23:02 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:36.515 19:23:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:36.515 00:08:36.515 real 0m2.730s 00:08:36.515 user 0m2.470s 00:08:36.515 sys 0m0.171s 00:08:36.515 19:23:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:36.515 19:23:02 -- common/autotest_common.sh@10 -- # set +x 00:08:36.515 ************************************ 00:08:36.515 END TEST 
accel_dif_verify 00:08:36.515 ************************************ 00:08:36.515 19:23:02 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:36.515 19:23:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:36.515 19:23:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:36.515 19:23:02 -- common/autotest_common.sh@10 -- # set +x 00:08:36.775 ************************************ 00:08:36.775 START TEST accel_dif_generate 00:08:36.775 ************************************ 00:08:36.775 19:23:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:08:36.775 19:23:02 -- accel/accel.sh@16 -- # local accel_opc 00:08:36.775 19:23:02 -- accel/accel.sh@17 -- # local accel_module 00:08:36.775 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:36.775 19:23:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:36.775 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:36.775 19:23:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:36.775 19:23:02 -- accel/accel.sh@12 -- # build_accel_config 00:08:36.775 19:23:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:36.775 19:23:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:36.775 19:23:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:36.775 19:23:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.775 19:23:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:36.775 19:23:02 -- accel/accel.sh@40 -- # local IFS=, 00:08:36.775 19:23:02 -- accel/accel.sh@41 -- # jq -r . 00:08:36.775 [2024-04-24 19:23:02.302842] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:36.775 [2024-04-24 19:23:02.303032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65666 ] 00:08:37.032 [2024-04-24 19:23:02.470200] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.290 [2024-04-24 19:23:02.715874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.546 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:37.546 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.546 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.546 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.546 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:37.546 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.546 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.546 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.546 19:23:02 -- accel/accel.sh@20 -- # val=0x1 00:08:37.546 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.546 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.546 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.546 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:37.546 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.546 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.546 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.546 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:37.546 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.546 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.546 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.546 19:23:02 -- accel/accel.sh@20 -- # val=dif_generate 00:08:37.546 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.546 19:23:02 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:37.546 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.546 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.546 19:23:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:37.546 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.547 19:23:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:37.547 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.547 19:23:02 -- accel/accel.sh@20 -- # val='512 bytes' 00:08:37.547 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.547 19:23:02 -- accel/accel.sh@20 -- # val='8 bytes' 00:08:37.547 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.547 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:37.547 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.547 19:23:02 -- accel/accel.sh@20 -- # val=software 00:08:37.547 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:02 -- accel/accel.sh@22 -- # accel_module=software 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.547 19:23:02 -- accel/accel.sh@20 -- # val=32 00:08:37.547 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.547 19:23:02 -- accel/accel.sh@20 -- # val=32 00:08:37.547 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.547 19:23:02 -- accel/accel.sh@20 -- # val=1 00:08:37.547 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.547 19:23:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:37.547 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.547 19:23:02 -- accel/accel.sh@20 -- # val=No 00:08:37.547 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.547 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:37.547 19:23:02 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:02 -- accel/accel.sh@19 -- # read -r var val 00:08:37.547 19:23:02 -- accel/accel.sh@20 -- # val= 00:08:37.547 19:23:03 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.547 19:23:03 -- accel/accel.sh@19 -- # IFS=: 00:08:37.547 19:23:03 -- accel/accel.sh@19 -- # read -r var val 00:08:39.445 19:23:04 -- accel/accel.sh@20 -- # val= 00:08:39.445 19:23:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.445 19:23:04 -- accel/accel.sh@19 -- # IFS=: 00:08:39.445 19:23:04 -- 
accel/accel.sh@19 -- # read -r var val 00:08:39.445 19:23:04 -- accel/accel.sh@20 -- # val= 00:08:39.445 19:23:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.445 19:23:04 -- accel/accel.sh@19 -- # IFS=: 00:08:39.445 19:23:04 -- accel/accel.sh@19 -- # read -r var val 00:08:39.445 19:23:04 -- accel/accel.sh@20 -- # val= 00:08:39.445 19:23:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.445 19:23:04 -- accel/accel.sh@19 -- # IFS=: 00:08:39.445 19:23:04 -- accel/accel.sh@19 -- # read -r var val 00:08:39.445 19:23:04 -- accel/accel.sh@20 -- # val= 00:08:39.445 19:23:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.445 19:23:04 -- accel/accel.sh@19 -- # IFS=: 00:08:39.445 19:23:04 -- accel/accel.sh@19 -- # read -r var val 00:08:39.445 19:23:04 -- accel/accel.sh@20 -- # val= 00:08:39.445 19:23:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.445 19:23:04 -- accel/accel.sh@19 -- # IFS=: 00:08:39.445 19:23:04 -- accel/accel.sh@19 -- # read -r var val 00:08:39.445 19:23:04 -- accel/accel.sh@20 -- # val= 00:08:39.445 19:23:04 -- accel/accel.sh@21 -- # case "$var" in 00:08:39.445 19:23:04 -- accel/accel.sh@19 -- # IFS=: 00:08:39.445 19:23:04 -- accel/accel.sh@19 -- # read -r var val 00:08:39.445 19:23:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:39.445 19:23:04 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:39.445 19:23:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:39.445 00:08:39.445 real 0m2.703s 00:08:39.445 user 0m2.464s 00:08:39.445 sys 0m0.152s 00:08:39.445 19:23:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:39.446 19:23:04 -- common/autotest_common.sh@10 -- # set +x 00:08:39.446 ************************************ 00:08:39.446 END TEST accel_dif_generate 00:08:39.446 ************************************ 00:08:39.446 19:23:04 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:39.446 19:23:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:39.446 19:23:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:39.446 19:23:05 -- common/autotest_common.sh@10 -- # set +x 00:08:39.446 ************************************ 00:08:39.446 START TEST accel_dif_generate_copy 00:08:39.446 ************************************ 00:08:39.446 19:23:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:08:39.446 19:23:05 -- accel/accel.sh@16 -- # local accel_opc 00:08:39.446 19:23:05 -- accel/accel.sh@17 -- # local accel_module 00:08:39.446 19:23:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:39.446 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:39.446 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:39.446 19:23:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:39.446 19:23:05 -- accel/accel.sh@12 -- # build_accel_config 00:08:39.446 19:23:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:39.446 19:23:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:39.446 19:23:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:39.446 19:23:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:39.446 19:23:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:39.446 19:23:05 -- accel/accel.sh@40 -- # local IFS=, 00:08:39.446 19:23:05 -- accel/accel.sh@41 -- # jq -r . 00:08:39.711 [2024-04-24 19:23:05.133306] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
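
A suite here passes only if the three post-run checks at accel.sh@27 hold, which the xtrace shows with the variables already expanded: a module was reported ([[ -n software ]]), the expected opcode ran (e.g. [[ -n dif_generate ]] just above), and the selected module is the software fallback. Rewritten with the variable names the traces assign (accel_module, accel_opc), the guard is roughly:

    # post-run assertions, reconstructed from the expanded xtrace in this log
    [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == software ]]
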
00:08:39.711 [2024-04-24 19:23:05.133557] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65722 ] 00:08:39.712 [2024-04-24 19:23:05.312105] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.971 [2024-04-24 19:23:05.570609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.229 19:23:05 -- accel/accel.sh@20 -- # val= 00:08:40.229 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.229 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.229 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.229 19:23:05 -- accel/accel.sh@20 -- # val= 00:08:40.229 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val=0x1 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val= 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val= 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val= 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val=software 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@22 -- # accel_module=software 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val=32 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val=32 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 
-- # val=1 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val=No 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val= 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:40.230 19:23:05 -- accel/accel.sh@20 -- # val= 00:08:40.230 19:23:05 -- accel/accel.sh@21 -- # case "$var" in 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # IFS=: 00:08:40.230 19:23:05 -- accel/accel.sh@19 -- # read -r var val 00:08:42.763 19:23:07 -- accel/accel.sh@20 -- # val= 00:08:42.763 19:23:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # IFS=: 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # read -r var val 00:08:42.763 19:23:07 -- accel/accel.sh@20 -- # val= 00:08:42.763 19:23:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # IFS=: 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # read -r var val 00:08:42.763 19:23:07 -- accel/accel.sh@20 -- # val= 00:08:42.763 19:23:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # IFS=: 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # read -r var val 00:08:42.763 19:23:07 -- accel/accel.sh@20 -- # val= 00:08:42.763 19:23:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # IFS=: 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # read -r var val 00:08:42.763 19:23:07 -- accel/accel.sh@20 -- # val= 00:08:42.763 19:23:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # IFS=: 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # read -r var val 00:08:42.763 19:23:07 -- accel/accel.sh@20 -- # val= 00:08:42.763 19:23:07 -- accel/accel.sh@21 -- # case "$var" in 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # IFS=: 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # read -r var val 00:08:42.763 19:23:07 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:42.763 19:23:07 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:42.763 19:23:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:42.763 00:08:42.763 real 0m2.755s 00:08:42.763 user 0m2.491s 00:08:42.763 sys 0m0.176s 00:08:42.763 19:23:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:42.763 19:23:07 -- common/autotest_common.sh@10 -- # set +x 00:08:42.763 ************************************ 00:08:42.763 END TEST accel_dif_generate_copy 00:08:42.763 ************************************ 00:08:42.763 19:23:07 -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:42.763 19:23:07 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:42.763 19:23:07 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:42.763 19:23:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:42.763 19:23:07 -- 
common/autotest_common.sh@10 -- # set +x 00:08:42.763 ************************************ 00:08:42.763 START TEST accel_comp 00:08:42.763 ************************************ 00:08:42.763 19:23:07 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:42.763 19:23:07 -- accel/accel.sh@16 -- # local accel_opc 00:08:42.763 19:23:07 -- accel/accel.sh@17 -- # local accel_module 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # IFS=: 00:08:42.763 19:23:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:42.763 19:23:07 -- accel/accel.sh@19 -- # read -r var val 00:08:42.763 19:23:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:42.763 19:23:07 -- accel/accel.sh@12 -- # build_accel_config 00:08:42.763 19:23:07 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:42.763 19:23:07 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:42.763 19:23:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:42.763 19:23:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:42.763 19:23:07 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:42.763 19:23:07 -- accel/accel.sh@40 -- # local IFS=, 00:08:42.763 19:23:07 -- accel/accel.sh@41 -- # jq -r . 00:08:42.763 [2024-04-24 19:23:08.034046] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:42.763 [2024-04-24 19:23:08.034223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65778 ] 00:08:42.763 [2024-04-24 19:23:08.197506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.023 [2024-04-24 19:23:08.444098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.023 19:23:08 -- accel/accel.sh@20 -- # val= 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val= 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val= 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val=0x1 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val= 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val= 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val=compress 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@23 
-- # accel_opc=compress 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val= 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val=software 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@22 -- # accel_module=software 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val=32 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val=32 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val=1 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val=No 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val= 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:43.314 19:23:08 -- accel/accel.sh@20 -- # val= 00:08:43.314 19:23:08 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # IFS=: 00:08:43.314 19:23:08 -- accel/accel.sh@19 -- # read -r var val 00:08:45.226 19:23:10 -- accel/accel.sh@20 -- # val= 00:08:45.226 19:23:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # IFS=: 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # read -r var val 00:08:45.226 19:23:10 -- accel/accel.sh@20 -- # val= 00:08:45.226 19:23:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # IFS=: 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # read -r var val 00:08:45.226 19:23:10 -- accel/accel.sh@20 -- # val= 00:08:45.226 19:23:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # IFS=: 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # 
read -r var val 00:08:45.226 19:23:10 -- accel/accel.sh@20 -- # val= 00:08:45.226 19:23:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # IFS=: 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # read -r var val 00:08:45.226 19:23:10 -- accel/accel.sh@20 -- # val= 00:08:45.226 19:23:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # IFS=: 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # read -r var val 00:08:45.226 19:23:10 -- accel/accel.sh@20 -- # val= 00:08:45.226 19:23:10 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # IFS=: 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # read -r var val 00:08:45.226 19:23:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:45.226 19:23:10 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:45.226 19:23:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:45.226 00:08:45.226 real 0m2.726s 00:08:45.226 user 0m2.475s 00:08:45.226 sys 0m0.163s 00:08:45.226 19:23:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:45.226 19:23:10 -- common/autotest_common.sh@10 -- # set +x 00:08:45.226 ************************************ 00:08:45.226 END TEST accel_comp 00:08:45.226 ************************************ 00:08:45.226 19:23:10 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:45.226 19:23:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:45.226 19:23:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.226 19:23:10 -- common/autotest_common.sh@10 -- # set +x 00:08:45.226 ************************************ 00:08:45.226 START TEST accel_decomp 00:08:45.226 ************************************ 00:08:45.226 19:23:10 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:45.226 19:23:10 -- accel/accel.sh@16 -- # local accel_opc 00:08:45.226 19:23:10 -- accel/accel.sh@17 -- # local accel_module 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # IFS=: 00:08:45.226 19:23:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:45.226 19:23:10 -- accel/accel.sh@19 -- # read -r var val 00:08:45.226 19:23:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:08:45.226 19:23:10 -- accel/accel.sh@12 -- # build_accel_config 00:08:45.226 19:23:10 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:45.226 19:23:10 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:45.226 19:23:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:45.226 19:23:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:45.226 19:23:10 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:45.226 19:23:10 -- accel/accel.sh@40 -- # local IFS=, 00:08:45.226 19:23:10 -- accel/accel.sh@41 -- # jq -r . 00:08:45.483 [2024-04-24 19:23:10.903091] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
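The accel_comp case that just closed (real 0m2.726s) drives the software compress path. Judging from the xtrace, the accel_test wrapper reduces to a single accel_perf invocation; as a minimal standalone sketch, assuming a built SPDK tree under /home/vagrant/spdk_repo/spdk and skipping the -c /dev/fd/62 JSON config that the wrapper pipes in via jq:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib

Here -t is the run time in seconds (the trace's val='1 seconds') and -l names the input file; with no core mask the app stays on one core, matching the 'Total cores available: 1' notice.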
00:08:45.483 [2024-04-24 19:23:10.903278] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65824 ] 00:08:45.483 [2024-04-24 19:23:11.067290] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.741 [2024-04-24 19:23:11.326356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val= 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val= 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val= 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val=0x1 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val= 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val= 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val=decompress 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val= 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val=software 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@22 -- # accel_module=software 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val=32 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- 
accel/accel.sh@20 -- # val=32 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val=1 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val=Yes 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val= 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:45.999 19:23:11 -- accel/accel.sh@20 -- # val= 00:08:45.999 19:23:11 -- accel/accel.sh@21 -- # case "$var" in 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # IFS=: 00:08:45.999 19:23:11 -- accel/accel.sh@19 -- # read -r var val 00:08:48.532 19:23:13 -- accel/accel.sh@20 -- # val= 00:08:48.532 19:23:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # IFS=: 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # read -r var val 00:08:48.532 19:23:13 -- accel/accel.sh@20 -- # val= 00:08:48.532 19:23:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # IFS=: 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # read -r var val 00:08:48.532 19:23:13 -- accel/accel.sh@20 -- # val= 00:08:48.532 19:23:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # IFS=: 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # read -r var val 00:08:48.532 19:23:13 -- accel/accel.sh@20 -- # val= 00:08:48.532 19:23:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # IFS=: 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # read -r var val 00:08:48.532 19:23:13 -- accel/accel.sh@20 -- # val= 00:08:48.532 19:23:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # IFS=: 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # read -r var val 00:08:48.532 19:23:13 -- accel/accel.sh@20 -- # val= 00:08:48.532 19:23:13 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # IFS=: 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # read -r var val 00:08:48.532 19:23:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:48.532 19:23:13 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:48.532 ************************************ 00:08:48.532 END TEST accel_decomp 00:08:48.532 ************************************ 00:08:48.532 19:23:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:48.532 00:08:48.532 real 0m2.781s 00:08:48.532 user 0m2.518s 00:08:48.532 sys 0m0.171s 00:08:48.532 19:23:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:48.532 19:23:13 -- common/autotest_common.sh@10 -- # set +x 00:08:48.532 19:23:13 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
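The accel_decomp case that just closed exercises the inverse path; its -y flag asks accel_perf to verify each output buffer, which is why this trace records val=Yes where the earlier dif_generate_copy case recorded val=No. The accel_decmop_full case queued above (the spelling comes verbatim from the suite's run_test call) repeats this with -o 0. A sketch of the verified run, same assumptions as before:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y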
00:08:48.532 19:23:13 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:48.532 19:23:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.532 19:23:13 -- common/autotest_common.sh@10 -- # set +x 00:08:48.532 ************************************ 00:08:48.532 START TEST accel_decmop_full 00:08:48.532 ************************************ 00:08:48.532 19:23:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:48.532 19:23:13 -- accel/accel.sh@16 -- # local accel_opc 00:08:48.532 19:23:13 -- accel/accel.sh@17 -- # local accel_module 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # IFS=: 00:08:48.532 19:23:13 -- accel/accel.sh@19 -- # read -r var val 00:08:48.532 19:23:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:48.532 19:23:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:08:48.532 19:23:13 -- accel/accel.sh@12 -- # build_accel_config 00:08:48.532 19:23:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:48.532 19:23:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:48.532 19:23:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:48.532 19:23:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:48.532 19:23:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:48.532 19:23:13 -- accel/accel.sh@40 -- # local IFS=, 00:08:48.532 19:23:13 -- accel/accel.sh@41 -- # jq -r . 00:08:48.532 [2024-04-24 19:23:13.800287] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:08:48.532 [2024-04-24 19:23:13.800466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65880 ] 00:08:48.532 [2024-04-24 19:23:13.963007] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.532 [2024-04-24 19:23:14.202701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.804 19:23:14 -- accel/accel.sh@20 -- # val= 00:08:48.804 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.804 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.804 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.804 19:23:14 -- accel/accel.sh@20 -- # val= 00:08:48.804 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.804 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.804 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.804 19:23:14 -- accel/accel.sh@20 -- # val= 00:08:48.804 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.804 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.804 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.804 19:23:14 -- accel/accel.sh@20 -- # val=0x1 00:08:48.805 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.805 19:23:14 -- accel/accel.sh@20 -- # val= 00:08:48.805 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.805 19:23:14 -- accel/accel.sh@20 -- # val= 00:08:48.805 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.805 
19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.805 19:23:14 -- accel/accel.sh@20 -- # val=decompress 00:08:48.805 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.805 19:23:14 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.805 19:23:14 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:48.805 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.805 19:23:14 -- accel/accel.sh@20 -- # val= 00:08:48.805 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.805 19:23:14 -- accel/accel.sh@20 -- # val=software 00:08:48.805 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.805 19:23:14 -- accel/accel.sh@22 -- # accel_module=software 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.805 19:23:14 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:48.805 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.805 19:23:14 -- accel/accel.sh@20 -- # val=32 00:08:48.805 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.805 19:23:14 -- accel/accel.sh@20 -- # val=32 00:08:48.805 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.805 19:23:14 -- accel/accel.sh@20 -- # val=1 00:08:48.805 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.805 19:23:14 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:48.805 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:48.805 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:48.805 19:23:14 -- accel/accel.sh@20 -- # val=Yes 00:08:49.063 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.063 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:49.063 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:49.063 19:23:14 -- accel/accel.sh@20 -- # val= 00:08:49.063 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.063 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:49.063 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:49.063 19:23:14 -- accel/accel.sh@20 -- # val= 00:08:49.063 19:23:14 -- accel/accel.sh@21 -- # case "$var" in 00:08:49.063 19:23:14 -- accel/accel.sh@19 -- # IFS=: 00:08:49.063 19:23:14 -- accel/accel.sh@19 -- # read -r var val 00:08:50.965 19:23:16 -- accel/accel.sh@20 -- # val= 00:08:50.965 19:23:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.965 19:23:16 -- accel/accel.sh@19 -- # IFS=: 00:08:50.965 19:23:16 -- accel/accel.sh@19 -- # read -r var val 00:08:50.965 19:23:16 -- accel/accel.sh@20 -- # val= 00:08:50.965 19:23:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.965 19:23:16 -- accel/accel.sh@19 -- # IFS=: 00:08:50.965 19:23:16 -- accel/accel.sh@19 -- # read -r 
var val 00:08:50.965 19:23:16 -- accel/accel.sh@20 -- # val= 00:08:50.966 19:23:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.966 19:23:16 -- accel/accel.sh@19 -- # IFS=: 00:08:50.966 19:23:16 -- accel/accel.sh@19 -- # read -r var val 00:08:50.966 19:23:16 -- accel/accel.sh@20 -- # val= 00:08:50.966 19:23:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.966 19:23:16 -- accel/accel.sh@19 -- # IFS=: 00:08:50.966 19:23:16 -- accel/accel.sh@19 -- # read -r var val 00:08:50.966 19:23:16 -- accel/accel.sh@20 -- # val= 00:08:50.966 19:23:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.966 19:23:16 -- accel/accel.sh@19 -- # IFS=: 00:08:50.966 19:23:16 -- accel/accel.sh@19 -- # read -r var val 00:08:50.966 19:23:16 -- accel/accel.sh@20 -- # val= 00:08:50.966 19:23:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:50.966 19:23:16 -- accel/accel.sh@19 -- # IFS=: 00:08:50.966 19:23:16 -- accel/accel.sh@19 -- # read -r var val 00:08:50.966 19:23:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:50.966 19:23:16 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:50.966 19:23:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:50.966 00:08:50.966 real 0m2.761s 00:08:50.966 user 0m2.509s 00:08:50.966 sys 0m0.162s 00:08:50.966 19:23:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:50.966 19:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:50.966 ************************************ 00:08:50.966 END TEST accel_decmop_full 00:08:50.966 ************************************ 00:08:50.966 19:23:16 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:50.966 19:23:16 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:50.966 19:23:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.966 19:23:16 -- common/autotest_common.sh@10 -- # set +x 00:08:50.966 ************************************ 00:08:50.966 START TEST accel_decomp_mcore 00:08:50.966 ************************************ 00:08:50.966 19:23:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:50.966 19:23:16 -- accel/accel.sh@16 -- # local accel_opc 00:08:50.966 19:23:16 -- accel/accel.sh@17 -- # local accel_module 00:08:51.224 19:23:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:51.224 19:23:16 -- accel/accel.sh@19 -- # IFS=: 00:08:51.224 19:23:16 -- accel/accel.sh@19 -- # read -r var val 00:08:51.224 19:23:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:08:51.224 19:23:16 -- accel/accel.sh@12 -- # build_accel_config 00:08:51.224 19:23:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:51.224 19:23:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:51.224 19:23:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:51.224 19:23:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:51.224 19:23:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:51.224 19:23:16 -- accel/accel.sh@40 -- # local IFS=, 00:08:51.224 19:23:16 -- accel/accel.sh@41 -- # jq -r . 00:08:51.224 [2024-04-24 19:23:16.689674] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
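The full-buffer variant's -o 0 shows up in the trace as val='111250 bytes' in place of the usual '4096 bytes': -o sets the transfer size in bytes, and 0 evidently expands here to the whole bib file. Sketch:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0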
00:08:51.224 [2024-04-24 19:23:16.689988] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65936 ] 00:08:51.224 [2024-04-24 19:23:16.862877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.483 [2024-04-24 19:23:17.127120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.483 [2024-04-24 19:23:17.127399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.483 [2024-04-24 19:23:17.127298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.483 [2024-04-24 19:23:17.127432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.050 19:23:17 -- accel/accel.sh@20 -- # val= 00:08:52.050 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.050 19:23:17 -- accel/accel.sh@20 -- # val= 00:08:52.050 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.050 19:23:17 -- accel/accel.sh@20 -- # val= 00:08:52.050 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.050 19:23:17 -- accel/accel.sh@20 -- # val=0xf 00:08:52.050 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.050 19:23:17 -- accel/accel.sh@20 -- # val= 00:08:52.050 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.050 19:23:17 -- accel/accel.sh@20 -- # val= 00:08:52.050 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.050 19:23:17 -- accel/accel.sh@20 -- # val=decompress 00:08:52.050 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.050 19:23:17 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.050 19:23:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:52.050 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.050 19:23:17 -- accel/accel.sh@20 -- # val= 00:08:52.050 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.050 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.050 19:23:17 -- accel/accel.sh@20 -- # val=software 00:08:52.051 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.051 19:23:17 -- accel/accel.sh@22 -- # accel_module=software 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.051 19:23:17 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:52.051 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # IFS=: 
00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.051 19:23:17 -- accel/accel.sh@20 -- # val=32 00:08:52.051 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.051 19:23:17 -- accel/accel.sh@20 -- # val=32 00:08:52.051 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.051 19:23:17 -- accel/accel.sh@20 -- # val=1 00:08:52.051 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.051 19:23:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:52.051 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.051 19:23:17 -- accel/accel.sh@20 -- # val=Yes 00:08:52.051 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.051 19:23:17 -- accel/accel.sh@20 -- # val= 00:08:52.051 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:52.051 19:23:17 -- accel/accel.sh@20 -- # val= 00:08:52.051 19:23:17 -- accel/accel.sh@21 -- # case "$var" in 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # IFS=: 00:08:52.051 19:23:17 -- accel/accel.sh@19 -- # read -r var val 00:08:53.953 19:23:19 -- accel/accel.sh@20 -- # val= 00:08:53.953 19:23:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # IFS=: 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # read -r var val 00:08:53.953 19:23:19 -- accel/accel.sh@20 -- # val= 00:08:53.953 19:23:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # IFS=: 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # read -r var val 00:08:53.953 19:23:19 -- accel/accel.sh@20 -- # val= 00:08:53.953 19:23:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # IFS=: 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # read -r var val 00:08:53.953 19:23:19 -- accel/accel.sh@20 -- # val= 00:08:53.953 19:23:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # IFS=: 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # read -r var val 00:08:53.953 19:23:19 -- accel/accel.sh@20 -- # val= 00:08:53.953 19:23:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # IFS=: 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # read -r var val 00:08:53.953 19:23:19 -- accel/accel.sh@20 -- # val= 00:08:53.953 19:23:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # IFS=: 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # read -r var val 00:08:53.953 19:23:19 -- accel/accel.sh@20 -- # val= 00:08:53.953 19:23:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # IFS=: 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # read -r var val 00:08:53.953 19:23:19 -- accel/accel.sh@20 -- # val= 00:08:53.953 19:23:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # IFS=: 00:08:53.953 19:23:19 -- 
accel/accel.sh@19 -- # read -r var val 00:08:53.953 19:23:19 -- accel/accel.sh@20 -- # val= 00:08:53.953 19:23:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # IFS=: 00:08:53.953 19:23:19 -- accel/accel.sh@19 -- # read -r var val 00:08:53.953 19:23:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:53.953 19:23:19 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:53.953 19:23:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:53.953 00:08:53.953 real 0m2.867s 00:08:53.953 user 0m8.291s 00:08:53.953 sys 0m0.198s 00:08:53.953 ************************************ 00:08:53.953 END TEST accel_decomp_mcore 00:08:53.953 ************************************ 00:08:53.953 19:23:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:53.953 19:23:19 -- common/autotest_common.sh@10 -- # set +x 00:08:53.953 19:23:19 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:53.953 19:23:19 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:53.953 19:23:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:53.953 19:23:19 -- common/autotest_common.sh@10 -- # set +x 00:08:54.212 ************************************ 00:08:54.212 START TEST accel_decomp_full_mcore 00:08:54.212 ************************************ 00:08:54.212 19:23:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.212 19:23:19 -- accel/accel.sh@16 -- # local accel_opc 00:08:54.212 19:23:19 -- accel/accel.sh@17 -- # local accel_module 00:08:54.212 19:23:19 -- accel/accel.sh@19 -- # IFS=: 00:08:54.212 19:23:19 -- accel/accel.sh@19 -- # read -r var val 00:08:54.212 19:23:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.212 19:23:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:54.212 19:23:19 -- accel/accel.sh@12 -- # build_accel_config 00:08:54.212 19:23:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:54.212 19:23:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:54.212 19:23:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:54.212 19:23:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:54.212 19:23:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:54.212 19:23:19 -- accel/accel.sh@40 -- # local IFS=, 00:08:54.212 19:23:19 -- accel/accel.sh@41 -- # jq -r . 00:08:54.212 [2024-04-24 19:23:19.706264] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
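With -m 0xf the mcore case fans out over four reactors ('Total cores available: 4', one 'Reactor started' notice per core), and the accounting shows it: roughly 8.3s of user time inside a 2.9s wall-clock run. Sketch:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf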
00:08:54.212 [2024-04-24 19:23:19.706406] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65994 ] 00:08:54.212 [2024-04-24 19:23:19.876489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:54.471 [2024-04-24 19:23:20.143761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.471 [2024-04-24 19:23:20.143880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.471 [2024-04-24 19:23:20.144022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.471 [2024-04-24 19:23:20.144057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val= 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val= 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val= 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val=0xf 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val= 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val= 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val=decompress 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val='111250 bytes' 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val= 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val=software 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@22 -- # accel_module=software 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 
00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val=32 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val=32 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val=1 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val=Yes 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val= 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:55.042 19:23:20 -- accel/accel.sh@20 -- # val= 00:08:55.042 19:23:20 -- accel/accel.sh@21 -- # case "$var" in 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # IFS=: 00:08:55.042 19:23:20 -- accel/accel.sh@19 -- # read -r var val 00:08:56.946 19:23:22 -- accel/accel.sh@20 -- # val= 00:08:56.946 19:23:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # IFS=: 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # read -r var val 00:08:56.946 19:23:22 -- accel/accel.sh@20 -- # val= 00:08:56.946 19:23:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # IFS=: 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # read -r var val 00:08:56.946 19:23:22 -- accel/accel.sh@20 -- # val= 00:08:56.946 19:23:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # IFS=: 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # read -r var val 00:08:56.946 19:23:22 -- accel/accel.sh@20 -- # val= 00:08:56.946 19:23:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # IFS=: 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # read -r var val 00:08:56.946 19:23:22 -- accel/accel.sh@20 -- # val= 00:08:56.946 19:23:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # IFS=: 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # read -r var val 00:08:56.946 19:23:22 -- accel/accel.sh@20 -- # val= 00:08:56.946 19:23:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # IFS=: 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # read -r var val 00:08:56.946 19:23:22 -- accel/accel.sh@20 -- # val= 00:08:56.946 19:23:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # IFS=: 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # read -r var val 00:08:56.946 19:23:22 -- accel/accel.sh@20 -- # val= 00:08:56.946 19:23:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.946 19:23:22 -- accel/accel.sh@19 -- # IFS=: 00:08:56.946 19:23:22 -- 
accel/accel.sh@19 -- # read -r var val 00:08:56.946 19:23:22 -- accel/accel.sh@20 -- # val= 00:08:56.946 19:23:22 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.947 19:23:22 -- accel/accel.sh@19 -- # IFS=: 00:08:56.947 19:23:22 -- accel/accel.sh@19 -- # read -r var val 00:08:56.947 19:23:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:56.947 19:23:22 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:56.947 19:23:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:56.947 00:08:56.947 real 0m2.905s 00:08:56.947 user 0m8.402s 00:08:56.947 sys 0m0.216s 00:08:56.947 19:23:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:56.947 19:23:22 -- common/autotest_common.sh@10 -- # set +x 00:08:56.947 ************************************ 00:08:56.947 END TEST accel_decomp_full_mcore 00:08:56.947 ************************************ 00:08:56.947 19:23:22 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:56.947 19:23:22 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:56.947 19:23:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.947 19:23:22 -- common/autotest_common.sh@10 -- # set +x 00:08:57.206 ************************************ 00:08:57.206 START TEST accel_decomp_mthread 00:08:57.206 ************************************ 00:08:57.206 19:23:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:57.206 19:23:22 -- accel/accel.sh@16 -- # local accel_opc 00:08:57.206 19:23:22 -- accel/accel.sh@17 -- # local accel_module 00:08:57.206 19:23:22 -- accel/accel.sh@19 -- # IFS=: 00:08:57.206 19:23:22 -- accel/accel.sh@19 -- # read -r var val 00:08:57.206 19:23:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:57.206 19:23:22 -- accel/accel.sh@12 -- # build_accel_config 00:08:57.206 19:23:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:08:57.206 19:23:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:57.206 19:23:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:57.206 19:23:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:57.206 19:23:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:57.206 19:23:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:57.206 19:23:22 -- accel/accel.sh@40 -- # local IFS=, 00:08:57.206 19:23:22 -- accel/accel.sh@41 -- # jq -r . 00:08:57.206 [2024-04-24 19:23:22.760704] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
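accel_decomp_full_mcore combines the two previous variants, full-size transfers (-o 0, val='111250 bytes') spread across the 0xf mask, and user time again lands near four times wall clock (8.402s over 2.905s). Sketch:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf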
00:08:57.206 [2024-04-24 19:23:22.760962] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66049 ] 00:08:57.468 [2024-04-24 19:23:22.935370] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.735 [2024-04-24 19:23:23.174256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val= 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val= 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val= 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val=0x1 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val= 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val= 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val=decompress 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@23 -- # accel_opc=decompress 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val= 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val=software 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@22 -- # accel_module=software 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val=32 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- 
accel/accel.sh@20 -- # val=32 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val=2 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val=Yes 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.027 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.027 19:23:23 -- accel/accel.sh@20 -- # val= 00:08:58.027 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.028 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.028 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:58.028 19:23:23 -- accel/accel.sh@20 -- # val= 00:08:58.028 19:23:23 -- accel/accel.sh@21 -- # case "$var" in 00:08:58.028 19:23:23 -- accel/accel.sh@19 -- # IFS=: 00:08:58.028 19:23:23 -- accel/accel.sh@19 -- # read -r var val 00:08:59.934 19:23:25 -- accel/accel.sh@20 -- # val= 00:08:59.934 19:23:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # IFS=: 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # read -r var val 00:08:59.934 19:23:25 -- accel/accel.sh@20 -- # val= 00:08:59.934 19:23:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # IFS=: 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # read -r var val 00:08:59.934 19:23:25 -- accel/accel.sh@20 -- # val= 00:08:59.934 19:23:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # IFS=: 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # read -r var val 00:08:59.934 19:23:25 -- accel/accel.sh@20 -- # val= 00:08:59.934 19:23:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # IFS=: 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # read -r var val 00:08:59.934 19:23:25 -- accel/accel.sh@20 -- # val= 00:08:59.934 19:23:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # IFS=: 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # read -r var val 00:08:59.934 19:23:25 -- accel/accel.sh@20 -- # val= 00:08:59.934 19:23:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # IFS=: 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # read -r var val 00:08:59.934 19:23:25 -- accel/accel.sh@20 -- # val= 00:08:59.934 19:23:25 -- accel/accel.sh@21 -- # case "$var" in 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # IFS=: 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # read -r var val 00:08:59.934 19:23:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:59.934 19:23:25 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:08:59.934 ************************************ 00:08:59.934 END TEST accel_decomp_mthread 00:08:59.934 ************************************ 00:08:59.934 19:23:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:59.934 00:08:59.934 real 0m2.763s 00:08:59.934 user 0m2.509s 00:08:59.934 sys 0m0.169s 00:08:59.934 19:23:25 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:08:59.934 19:23:25 -- common/autotest_common.sh@10 -- # set +x 00:08:59.934 19:23:25 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:59.934 19:23:25 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:59.934 19:23:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.934 19:23:25 -- common/autotest_common.sh@10 -- # set +x 00:08:59.934 ************************************ 00:08:59.934 START TEST accel_deomp_full_mthread 00:08:59.934 ************************************ 00:08:59.934 19:23:25 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:59.934 19:23:25 -- accel/accel.sh@16 -- # local accel_opc 00:08:59.934 19:23:25 -- accel/accel.sh@17 -- # local accel_module 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # IFS=: 00:08:59.934 19:23:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:59.934 19:23:25 -- accel/accel.sh@19 -- # read -r var val 00:08:59.934 19:23:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:08:59.934 19:23:25 -- accel/accel.sh@12 -- # build_accel_config 00:08:59.934 19:23:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:59.934 19:23:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:59.934 19:23:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:59.934 19:23:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:59.935 19:23:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:59.935 19:23:25 -- accel/accel.sh@40 -- # local IFS=, 00:08:59.935 19:23:25 -- accel/accel.sh@41 -- # jq -r . 00:09:00.193 [2024-04-24 19:23:25.653771] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
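The mthread case swaps the core mask for -T 2, which by all appearances runs two worker threads on the single core (val=2 in this trace where the earlier single-threaded runs recorded val=1); the accel_deomp_full_mthread case starting above (again the suite's own spelling) stacks -o 0 on top of it. Sketch of the threaded run:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2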
00:09:00.193 [2024-04-24 19:23:25.653878] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66105 ] 00:09:00.193 [2024-04-24 19:23:25.816104] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.453 [2024-04-24 19:23:26.064237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val= 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val= 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val= 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val=0x1 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val= 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val= 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val=decompress 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val= 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val=software 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@22 -- # accel_module=software 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val=32 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- 
accel/accel.sh@20 -- # val=32 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val=2 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val=Yes 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val= 00:09:00.711 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.711 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:00.711 19:23:26 -- accel/accel.sh@20 -- # val= 00:09:00.712 19:23:26 -- accel/accel.sh@21 -- # case "$var" in 00:09:00.712 19:23:26 -- accel/accel.sh@19 -- # IFS=: 00:09:00.712 19:23:26 -- accel/accel.sh@19 -- # read -r var val 00:09:03.249 19:23:28 -- accel/accel.sh@20 -- # val= 00:09:03.249 19:23:28 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # IFS=: 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # read -r var val 00:09:03.249 19:23:28 -- accel/accel.sh@20 -- # val= 00:09:03.249 19:23:28 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # IFS=: 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # read -r var val 00:09:03.249 19:23:28 -- accel/accel.sh@20 -- # val= 00:09:03.249 19:23:28 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # IFS=: 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # read -r var val 00:09:03.249 19:23:28 -- accel/accel.sh@20 -- # val= 00:09:03.249 19:23:28 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # IFS=: 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # read -r var val 00:09:03.249 19:23:28 -- accel/accel.sh@20 -- # val= 00:09:03.249 19:23:28 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # IFS=: 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # read -r var val 00:09:03.249 19:23:28 -- accel/accel.sh@20 -- # val= 00:09:03.249 19:23:28 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # IFS=: 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # read -r var val 00:09:03.249 19:23:28 -- accel/accel.sh@20 -- # val= 00:09:03.249 19:23:28 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # IFS=: 00:09:03.249 19:23:28 -- accel/accel.sh@19 -- # read -r var val 00:09:03.249 19:23:28 -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:03.249 19:23:28 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:03.249 19:23:28 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:03.249 00:09:03.249 real 0m2.789s 00:09:03.249 user 0m2.526s 00:09:03.249 sys 0m0.176s 00:09:03.249 19:23:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:03.249 19:23:28 -- common/autotest_common.sh@10 -- # set +x 00:09:03.249 ************************************ 00:09:03.249 END 
TEST accel_deomp_full_mthread 00:09:03.249 ************************************ 00:09:03.249 19:23:28 -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:03.249 19:23:28 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:03.249 19:23:28 -- accel/accel.sh@137 -- # build_accel_config 00:09:03.249 19:23:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:03.249 19:23:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:03.249 19:23:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:09:03.249 19:23:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:03.249 19:23:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:03.249 19:23:28 -- common/autotest_common.sh@10 -- # set +x 00:09:03.249 19:23:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:03.249 19:23:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:03.249 19:23:28 -- accel/accel.sh@40 -- # local IFS=, 00:09:03.249 19:23:28 -- accel/accel.sh@41 -- # jq -r . 00:09:03.249 ************************************ 00:09:03.249 START TEST accel_dif_functional_tests 00:09:03.249 ************************************ 00:09:03.249 19:23:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:03.249 [2024-04-24 19:23:28.599628] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:09:03.249 [2024-04-24 19:23:28.599768] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66164 ] 00:09:03.249 [2024-04-24 19:23:28.748596] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:03.508 [2024-04-24 19:23:29.006166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.508 [2024-04-24 19:23:29.006300] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.508 [2024-04-24 19:23:29.006335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.766 00:09:03.766 00:09:03.766 CUnit - A unit testing framework for C - Version 2.1-3 00:09:03.766 http://cunit.sourceforge.net/ 00:09:03.766 00:09:03.766 00:09:03.766 Suite: accel_dif 00:09:03.766 Test: verify: DIF generated, GUARD check ...passed 00:09:03.766 Test: verify: DIF generated, APPTAG check ...passed 00:09:03.766 Test: verify: DIF generated, REFTAG check ...passed 00:09:03.766 Test: verify: DIF not generated, GUARD check ...passed 00:09:03.766 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 19:23:29.385165] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:03.766 [2024-04-24 19:23:29.385236] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:09:03.766 passed 00:09:03.766 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 19:23:29.385302] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:03.766 [2024-04-24 19:23:29.385350] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:09:03.766 [2024-04-24 19:23:29.385388] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:09:03.766 passed 00:09:03.766 Test: verify: APPTAG correct, APPTAG check ...[2024-04-24 19:23:29.385435] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:09:03.766 passed 00:09:03.766 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 19:23:29.385532] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:09:03.766 passed 00:09:03.766 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:09:03.766 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:09:03.766 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:09:03.766 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:09:03.766 Test: generate copy: DIF generated, GUARD check ...[2024-04-24 19:23:29.385821] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:09:03.766 passed 00:09:03.766 Test: generate copy: DIF generated, APTTAG check ...passed 00:09:03.766 Test: generate copy: DIF generated, REFTAG check ...passed 00:09:03.766 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:09:03.766 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:09:03.766 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:09:03.766 Test: generate copy: iovecs-len validate ...passed 00:09:03.766 Test: generate copy: buffer alignment validate ...[2024-04-24 19:23:29.386261] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:09:03.766 passed 00:09:03.766 00:09:03.767 Run Summary: Type Total Ran Passed Failed Inactive 00:09:03.767 suites 1 1 n/a 0 0 00:09:03.767 tests 20 20 20 0 0 00:09:03.767 asserts 204 204 204 0 n/a 00:09:03.767 00:09:03.767 Elapsed time = 0.003 seconds 00:09:05.144 00:09:05.144 real 0m2.173s 00:09:05.144 user 0m4.294s 00:09:05.144 sys 0m0.221s 00:09:05.144 ************************************ 00:09:05.144 END TEST accel_dif_functional_tests 00:09:05.144 ************************************ 00:09:05.144 19:23:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:05.144 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:09:05.144 00:09:05.144 real 1m9.269s 00:09:05.144 user 1m14.883s 00:09:05.144 sys 0m6.533s 00:09:05.144 19:23:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:05.144 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:09:05.144 ************************************ 00:09:05.144 END TEST accel 00:09:05.144 ************************************ 00:09:05.145 19:23:30 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:05.145 19:23:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:05.145 19:23:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.145 19:23:30 -- common/autotest_common.sh@10 -- # set +x 00:09:05.404 ************************************ 00:09:05.404 START TEST accel_rpc 00:09:05.404 ************************************ 00:09:05.404 19:23:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:09:05.404 * Looking for test storage... 
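The accel suite hands off to accel_rpc here. Before that, note that the DIF functional binary exercised above is a plain CUnit app and can be launched the same way run_test does; a sketch, with a hypothetical empty JSON config standing in for the one build_accel_config assembles:

    # Standalone run of the DIF functional tests; '{}' is a placeholder config
    # (the harness pipes a real accel config through fd 62 instead):
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 62< <(echo '{}')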
00:09:05.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:05.404 19:23:31 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:05.404 19:23:31 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66252 00:09:05.404 19:23:31 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:09:05.404 19:23:31 -- accel/accel_rpc.sh@15 -- # waitforlisten 66252 00:09:05.404 19:23:31 -- common/autotest_common.sh@817 -- # '[' -z 66252 ']' 00:09:05.404 19:23:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.404 19:23:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:05.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.404 19:23:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.404 19:23:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:05.404 19:23:31 -- common/autotest_common.sh@10 -- # set +x 00:09:05.663 [2024-04-24 19:23:31.115687] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:09:05.663 [2024-04-24 19:23:31.115797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66252 ] 00:09:05.663 [2024-04-24 19:23:31.276667] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.921 [2024-04-24 19:23:31.520748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.489 19:23:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:06.489 19:23:31 -- common/autotest_common.sh@850 -- # return 0 00:09:06.489 19:23:31 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:09:06.489 19:23:31 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:09:06.489 19:23:31 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:09:06.489 19:23:31 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:09:06.489 19:23:31 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:09:06.489 19:23:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:06.489 19:23:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:06.489 19:23:31 -- common/autotest_common.sh@10 -- # set +x 00:09:06.489 ************************************ 00:09:06.489 START TEST accel_assign_opcode 00:09:06.489 ************************************ 00:09:06.489 19:23:32 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:09:06.489 19:23:32 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:09:06.489 19:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.489 19:23:32 -- common/autotest_common.sh@10 -- # set +x 00:09:06.489 [2024-04-24 19:23:32.012551] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:09:06.489 19:23:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.489 19:23:32 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:09:06.489 19:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.489 19:23:32 -- common/autotest_common.sh@10 -- # set +x 00:09:06.489 [2024-04-24 19:23:32.024465] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:09:06.489 19:23:32 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:06.489 19:23:32 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:09:06.489 19:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:06.489 19:23:32 -- common/autotest_common.sh@10 -- # set +x 00:09:07.428 19:23:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.428 19:23:32 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:09:07.428 19:23:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.428 19:23:32 -- common/autotest_common.sh@10 -- # set +x 00:09:07.428 19:23:32 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:09:07.428 19:23:32 -- accel/accel_rpc.sh@42 -- # grep software 00:09:07.428 19:23:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.428 software 00:09:07.428 00:09:07.428 real 0m0.991s 00:09:07.428 user 0m0.052s 00:09:07.428 sys 0m0.013s 00:09:07.428 19:23:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:07.428 19:23:32 -- common/autotest_common.sh@10 -- # set +x 00:09:07.428 ************************************ 00:09:07.428 END TEST accel_assign_opcode 00:09:07.428 ************************************ 00:09:07.428 19:23:33 -- accel/accel_rpc.sh@55 -- # killprocess 66252 00:09:07.428 19:23:33 -- common/autotest_common.sh@936 -- # '[' -z 66252 ']' 00:09:07.428 19:23:33 -- common/autotest_common.sh@940 -- # kill -0 66252 00:09:07.428 19:23:33 -- common/autotest_common.sh@941 -- # uname 00:09:07.428 19:23:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:07.428 19:23:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66252 00:09:07.428 19:23:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:07.428 19:23:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:07.428 killing process with pid 66252 00:09:07.428 19:23:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66252' 00:09:07.428 19:23:33 -- common/autotest_common.sh@955 -- # kill 66252 00:09:07.428 19:23:33 -- common/autotest_common.sh@960 -- # wait 66252 00:09:09.963 00:09:09.963 real 0m4.672s 00:09:09.963 user 0m4.575s 00:09:09.963 sys 0m0.589s 00:09:09.963 19:23:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:09.963 19:23:35 -- common/autotest_common.sh@10 -- # set +x 00:09:09.963 ************************************ 00:09:09.963 END TEST accel_rpc 00:09:09.963 ************************************ 00:09:09.963 19:23:35 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:09.963 19:23:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:09.963 19:23:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:09.963 19:23:35 -- common/autotest_common.sh@10 -- # set +x 00:09:10.222 ************************************ 00:09:10.222 START TEST app_cmdline 00:09:10.222 ************************************ 00:09:10.222 19:23:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:10.222 * Looking for test storage... 
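Stripped of the harness plumbing, the accel_assign_opcode test that just passed is three RPCs against a target started with --wait-for-rpc; a hand-run sketch using the same RPC names seen above:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m incorrect    # accepted pre-init, even for a bogus module
    $rpc accel_assign_opc -o copy -m software     # re-assign the copy opcode to software
    $rpc framework_start_init                     # complete subsystem initialization
    $rpc accel_get_opc_assignments | jq -r .copy  # expect "software"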
00:09:10.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:10.222 19:23:35 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:10.222 19:23:35 -- app/cmdline.sh@17 -- # spdk_tgt_pid=66383 00:09:10.222 19:23:35 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:10.222 19:23:35 -- app/cmdline.sh@18 -- # waitforlisten 66383 00:09:10.222 19:23:35 -- common/autotest_common.sh@817 -- # '[' -z 66383 ']' 00:09:10.222 19:23:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.222 19:23:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:10.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.222 19:23:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.222 19:23:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:10.222 19:23:35 -- common/autotest_common.sh@10 -- # set +x 00:09:10.481 [2024-04-24 19:23:35.930620] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:09:10.481 [2024-04-24 19:23:35.930753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66383 ] 00:09:10.481 [2024-04-24 19:23:36.096034] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.740 [2024-04-24 19:23:36.340432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.676 19:23:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:11.676 19:23:37 -- common/autotest_common.sh@850 -- # return 0 00:09:11.676 19:23:37 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:11.935 { 00:09:11.935 "version": "SPDK v24.05-pre git sha1 dd57ed3e8", 00:09:11.935 "fields": { 00:09:11.935 "major": 24, 00:09:11.935 "minor": 5, 00:09:11.935 "patch": 0, 00:09:11.935 "suffix": "-pre", 00:09:11.935 "commit": "dd57ed3e8" 00:09:11.935 } 00:09:11.935 } 00:09:11.935 19:23:37 -- app/cmdline.sh@22 -- # expected_methods=() 00:09:11.935 19:23:37 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:11.935 19:23:37 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:11.935 19:23:37 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:11.935 19:23:37 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:11.935 19:23:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.935 19:23:37 -- common/autotest_common.sh@10 -- # set +x 00:09:11.935 19:23:37 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:11.935 19:23:37 -- app/cmdline.sh@26 -- # sort 00:09:11.935 19:23:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.935 19:23:37 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:11.935 19:23:37 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:11.935 19:23:37 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:11.935 19:23:37 -- common/autotest_common.sh@638 -- # local es=0 00:09:11.935 19:23:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:11.935 19:23:37 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.935 19:23:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:11.935 19:23:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.935 19:23:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:11.935 19:23:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.935 19:23:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:11.935 19:23:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.935 19:23:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:11.935 19:23:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:12.194 request: 00:09:12.194 { 00:09:12.194 "method": "env_dpdk_get_mem_stats", 00:09:12.194 "req_id": 1 00:09:12.194 } 00:09:12.194 Got JSON-RPC error response 00:09:12.194 response: 00:09:12.194 { 00:09:12.194 "code": -32601, 00:09:12.194 "message": "Method not found" 00:09:12.194 } 00:09:12.194 19:23:37 -- common/autotest_common.sh@641 -- # es=1 00:09:12.194 19:23:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:12.194 19:23:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:12.194 19:23:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:12.194 19:23:37 -- app/cmdline.sh@1 -- # killprocess 66383 00:09:12.194 19:23:37 -- common/autotest_common.sh@936 -- # '[' -z 66383 ']' 00:09:12.194 19:23:37 -- common/autotest_common.sh@940 -- # kill -0 66383 00:09:12.194 19:23:37 -- common/autotest_common.sh@941 -- # uname 00:09:12.194 19:23:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:12.194 19:23:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66383 00:09:12.194 killing process with pid 66383 00:09:12.194 19:23:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:12.194 19:23:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:12.194 19:23:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66383' 00:09:12.194 19:23:37 -- common/autotest_common.sh@955 -- # kill 66383 00:09:12.194 19:23:37 -- common/autotest_common.sh@960 -- # wait 66383 00:09:14.728 00:09:14.728 real 0m4.396s 00:09:14.728 user 0m4.583s 00:09:14.728 sys 0m0.537s 00:09:14.728 19:23:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:14.728 19:23:40 -- common/autotest_common.sh@10 -- # set +x 00:09:14.728 ************************************ 00:09:14.728 END TEST app_cmdline 00:09:14.728 ************************************ 00:09:14.728 19:23:40 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:14.728 19:23:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:14.728 19:23:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.728 19:23:40 -- common/autotest_common.sh@10 -- # set +x 00:09:14.728 ************************************ 00:09:14.728 START TEST version 00:09:14.728 ************************************ 00:09:14.728 19:23:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:14.728 * Looking for test storage... 
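The failure path above is the point of the cmdline test: spdk_tgt was started with an RPC allow-list, so anything off the list draws JSON-RPC error -32601. The two calls side by side:

    # Target was started as: spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version           # allowed -> version object shown above
    $rpc env_dpdk_get_mem_stats     # not on the list -> "Method not found" (-32601)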
00:09:14.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:14.728 19:23:40 -- app/version.sh@17 -- # get_header_version major 00:09:14.728 19:23:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:14.728 19:23:40 -- app/version.sh@14 -- # cut -f2 00:09:14.728 19:23:40 -- app/version.sh@14 -- # tr -d '"' 00:09:14.728 19:23:40 -- app/version.sh@17 -- # major=24 00:09:14.728 19:23:40 -- app/version.sh@18 -- # get_header_version minor 00:09:14.728 19:23:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:14.728 19:23:40 -- app/version.sh@14 -- # cut -f2 00:09:14.728 19:23:40 -- app/version.sh@14 -- # tr -d '"' 00:09:14.728 19:23:40 -- app/version.sh@18 -- # minor=5 00:09:14.728 19:23:40 -- app/version.sh@19 -- # get_header_version patch 00:09:14.728 19:23:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:14.728 19:23:40 -- app/version.sh@14 -- # cut -f2 00:09:14.728 19:23:40 -- app/version.sh@14 -- # tr -d '"' 00:09:14.728 19:23:40 -- app/version.sh@19 -- # patch=0 00:09:14.728 19:23:40 -- app/version.sh@20 -- # get_header_version suffix 00:09:14.728 19:23:40 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:14.728 19:23:40 -- app/version.sh@14 -- # cut -f2 00:09:14.728 19:23:40 -- app/version.sh@14 -- # tr -d '"' 00:09:14.728 19:23:40 -- app/version.sh@20 -- # suffix=-pre 00:09:14.728 19:23:40 -- app/version.sh@22 -- # version=24.5 00:09:14.728 19:23:40 -- app/version.sh@25 -- # (( patch != 0 )) 00:09:14.728 19:23:40 -- app/version.sh@28 -- # version=24.5rc0 00:09:14.728 19:23:40 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:14.728 19:23:40 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:14.988 19:23:40 -- app/version.sh@30 -- # py_version=24.5rc0 00:09:14.988 19:23:40 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:09:14.988 00:09:14.988 real 0m0.211s 00:09:14.988 user 0m0.112s 00:09:14.988 sys 0m0.149s 00:09:14.988 19:23:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:14.988 19:23:40 -- common/autotest_common.sh@10 -- # set +x 00:09:14.988 ************************************ 00:09:14.988 END TEST version 00:09:14.988 ************************************ 00:09:14.988 19:23:40 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:09:14.988 19:23:40 -- spdk/autotest.sh@194 -- # uname -s 00:09:14.988 19:23:40 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:14.988 19:23:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:14.988 19:23:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:14.988 19:23:40 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:09:14.988 19:23:40 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:14.988 19:23:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:14.988 19:23:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:14.988 19:23:40 -- common/autotest_common.sh@10 -- # set +x 00:09:14.988 ************************************ 00:09:14.988 START TEST blockdev_nvme 
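Each get_header_version call above is the same grep | cut | tr pipeline over version.h; written out for one field:

    # Equivalent of: get_header_version major  (yields 24 for this tree)
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
        /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'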
00:09:14.988 ************************************ 00:09:14.988 19:23:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:15.248 * Looking for test storage... 00:09:15.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:15.248 19:23:40 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:15.248 19:23:40 -- bdev/nbd_common.sh@6 -- # set -e 00:09:15.248 19:23:40 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:15.248 19:23:40 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:15.248 19:23:40 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:15.248 19:23:40 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:15.248 19:23:40 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:15.248 19:23:40 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:15.248 19:23:40 -- bdev/blockdev.sh@20 -- # : 00:09:15.248 19:23:40 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:15.248 19:23:40 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:15.248 19:23:40 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:15.248 19:23:40 -- bdev/blockdev.sh@674 -- # uname -s 00:09:15.248 19:23:40 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:15.248 19:23:40 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:15.248 19:23:40 -- bdev/blockdev.sh@682 -- # test_type=nvme 00:09:15.248 19:23:40 -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:15.248 19:23:40 -- bdev/blockdev.sh@684 -- # dek= 00:09:15.248 19:23:40 -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:15.248 19:23:40 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:15.248 19:23:40 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:15.248 19:23:40 -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:09:15.248 19:23:40 -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:09:15.248 19:23:40 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:15.248 19:23:40 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66566 00:09:15.248 19:23:40 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:15.248 19:23:40 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:15.248 19:23:40 -- bdev/blockdev.sh@49 -- # waitforlisten 66566 00:09:15.248 19:23:40 -- common/autotest_common.sh@817 -- # '[' -z 66566 ']' 00:09:15.248 19:23:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.248 19:23:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:15.248 19:23:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.248 19:23:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:15.248 19:23:40 -- common/autotest_common.sh@10 -- # set +x 00:09:15.248 [2024-04-24 19:23:40.830303] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
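waitforlisten here blocks until the freshly started target answers on its RPC socket; a loose reimplementation of the idea (the real helper in common/autotest_common.sh does more, e.g. it also verifies the pid stays alive):

    # Hypothetical minimal wait loop for the default socket:
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &> /dev/null; do
        sleep 0.1
    done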
00:09:15.248 [2024-04-24 19:23:40.830447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66566 ] 00:09:15.508 [2024-04-24 19:23:40.976011] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.767 [2024-04-24 19:23:41.233724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.720 19:23:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:16.720 19:23:42 -- common/autotest_common.sh@850 -- # return 0 00:09:16.720 19:23:42 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:16.720 19:23:42 -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:09:16.720 19:23:42 -- bdev/blockdev.sh@81 -- # local json 00:09:16.720 19:23:42 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:16.720 19:23:42 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:16.720 19:23:42 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:16.720 19:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.720 19:23:42 -- common/autotest_common.sh@10 -- # set +x 00:09:16.991 19:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.991 19:23:42 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:16.991 19:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.991 19:23:42 -- common/autotest_common.sh@10 -- # set +x 00:09:16.991 19:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.991 19:23:42 -- bdev/blockdev.sh@740 -- # cat 00:09:16.991 19:23:42 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:16.991 19:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.991 19:23:42 -- common/autotest_common.sh@10 -- # set +x 00:09:16.991 19:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:16.991 19:23:42 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:09:16.991 19:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:16.991 19:23:42 -- common/autotest_common.sh@10 -- # set +x 00:09:17.266 19:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:17.266 19:23:42 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:17.266 19:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:17.266 19:23:42 -- common/autotest_common.sh@10 -- # set +x 00:09:17.266 19:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:17.266 19:23:42 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:17.266 19:23:42 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:17.266 19:23:42 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:17.266 19:23:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:17.266 19:23:42 -- common/autotest_common.sh@10 -- # set +x 00:09:17.266 19:23:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:17.266 19:23:42 -- 
bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:17.266 19:23:42 -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:17.266 19:23:42 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "a7c0c8ba-f535-4676-986c-8be05e47e9d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a7c0c8ba-f535-4676-986c-8be05e47e9d8",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "657d054f-1b40-4a24-99e7-88ecb44064e3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "657d054f-1b40-4a24-99e7-88ecb44064e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "49d8b71d-937c-400d-ade9-f29e4c25f504"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "49d8b71d-937c-400d-ade9-f29e4c25f504",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": 
"0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "5e2d20bd-bce4-4a42-a709-01f04f810a6b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5e2d20bd-bce4-4a42-a709-01f04f810a6b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ed5ef572-bfc9-4f10-80b0-466129de4c4e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed5ef572-bfc9-4f10-80b0-466129de4c4e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "f1f8aef3-c291-4634-b557-f14cebcd368b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f1f8aef3-c291-4634-b557-f14cebcd368b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:17.266 19:23:42 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:17.266 19:23:42 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:09:17.266 19:23:42 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:17.266 19:23:42 -- bdev/blockdev.sh@754 -- # killprocess 66566 00:09:17.266 19:23:42 -- common/autotest_common.sh@936 -- # '[' -z 66566 ']' 00:09:17.266 19:23:42 -- common/autotest_common.sh@940 -- # kill -0 66566 00:09:17.266 19:23:42 -- common/autotest_common.sh@941 -- # uname 00:09:17.266 19:23:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:17.266 19:23:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66566 00:09:17.266 19:23:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:17.266 19:23:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:17.266 killing process with pid 66566 00:09:17.266 19:23:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66566' 00:09:17.266 19:23:42 -- common/autotest_common.sh@955 -- # kill 66566 00:09:17.266 19:23:42 -- common/autotest_common.sh@960 -- # wait 66566 00:09:19.859 19:23:45 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:19.859 19:23:45 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:19.859 19:23:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:19.859 19:23:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:19.859 19:23:45 -- common/autotest_common.sh@10 -- # set +x 00:09:19.859 ************************************ 00:09:19.859 START TEST bdev_hello_world 00:09:19.859 ************************************ 00:09:19.859 19:23:45 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:20.118 [2024-04-24 19:23:45.547891] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:09:20.118 [2024-04-24 19:23:45.548017] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66671 ] 00:09:20.118 [2024-04-24 19:23:45.714606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.418 [2024-04-24 19:23:45.965561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.357 [2024-04-24 19:23:46.668573] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:21.357 [2024-04-24 19:23:46.668619] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:21.357 [2024-04-24 19:23:46.668651] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:21.357 [2024-04-24 19:23:46.671549] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:21.357 [2024-04-24 19:23:46.672384] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:21.357 [2024-04-24 19:23:46.672419] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:21.357 [2024-04-24 19:23:46.672604] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:21.357 00:09:21.357 [2024-04-24 19:23:46.672642] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:22.305 00:09:22.305 real 0m2.357s 00:09:22.305 user 0m2.003s 00:09:22.305 sys 0m0.247s 00:09:22.305 19:23:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:22.305 19:23:47 -- common/autotest_common.sh@10 -- # set +x 00:09:22.305 ************************************ 00:09:22.305 END TEST bdev_hello_world 00:09:22.305 ************************************ 00:09:22.305 19:23:47 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:22.305 19:23:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:22.305 19:23:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:22.305 19:23:47 -- common/autotest_common.sh@10 -- # set +x 00:09:22.306 ************************************ 00:09:22.306 START TEST bdev_bounds 00:09:22.306 ************************************ 00:09:22.306 19:23:47 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:09:22.306 19:23:47 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:22.306 19:23:47 -- bdev/blockdev.sh@290 -- # bdevio_pid=66728 00:09:22.306 19:23:47 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:22.306 Process bdevio pid: 66728 00:09:22.306 19:23:47 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66728' 00:09:22.306 19:23:47 -- bdev/blockdev.sh@293 -- # waitforlisten 66728 00:09:22.306 19:23:47 -- common/autotest_common.sh@817 -- # '[' -z 66728 ']' 00:09:22.306 19:23:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.306 19:23:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:22.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.306 19:23:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
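The hello-world pass that just ended is a one-shot example app: open the named bdev, write a string, read it back, stop. Its full invocation, as logged:

    # Writes and reads back "Hello World!" on the first controller's namespace:
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''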
00:09:22.306 19:23:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:22.306 19:23:47 -- common/autotest_common.sh@10 -- # set +x 00:09:22.565 [2024-04-24 19:23:48.013871] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:09:22.565 [2024-04-24 19:23:48.013974] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66728 ] 00:09:22.565 [2024-04-24 19:23:48.178740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:22.824 [2024-04-24 19:23:48.425067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.824 [2024-04-24 19:23:48.425244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.824 [2024-04-24 19:23:48.425275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.762 19:23:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:23.762 19:23:49 -- common/autotest_common.sh@850 -- # return 0 00:09:23.762 19:23:49 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:23.762 I/O targets: 00:09:23.762 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:23.762 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:23.762 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:23.762 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:23.762 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:23.762 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:23.762 00:09:23.762 00:09:23.762 CUnit - A unit testing framework for C - Version 2.1-3 00:09:23.762 http://cunit.sourceforge.net/ 00:09:23.762 00:09:23.762 00:09:23.762 Suite: bdevio tests on: Nvme3n1 00:09:23.762 Test: blockdev write read block ...passed 00:09:23.762 Test: blockdev write zeroes read block ...passed 00:09:23.762 Test: blockdev write zeroes read no split ...passed 00:09:23.762 Test: blockdev write zeroes read split ...passed 00:09:23.762 Test: blockdev write zeroes read split partial ...passed 00:09:23.762 Test: blockdev reset ...[2024-04-24 19:23:49.337228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:23.762 passed 00:09:23.762 Test: blockdev write read 8 blocks ...[2024-04-24 19:23:49.340762] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
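bdev_bounds splits the work in two: bdevio sits waiting (-w) with the generated bdev config, and tests.py fires the CUnit suites over RPC. The pairing, as run above:

    # -w: wait for the RPC trigger; -s 0: no pre-reserved memory (PRE_RESERVED_MEM=0)
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests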
00:09:23.762 passed 00:09:23.762 Test: blockdev write read size > 128k ...passed 00:09:23.762 Test: blockdev write read invalid size ...passed 00:09:23.762 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:23.762 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:23.762 Test: blockdev write read max offset ...passed 00:09:23.762 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:23.762 Test: blockdev writev readv 8 blocks ...passed 00:09:23.763 Test: blockdev writev readv 30 x 1block ...passed 00:09:23.763 Test: blockdev writev readv block ...passed 00:09:23.763 Test: blockdev writev readv size > 128k ...passed 00:09:23.763 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:23.763 Test: blockdev comparev and writev ...[2024-04-24 19:23:49.348824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x289c0e000 len:0x1000 00:09:23.763 [2024-04-24 19:23:49.348870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:23.763 passed 00:09:23.763 Test: blockdev nvme passthru rw ...passed 00:09:23.763 Test: blockdev nvme passthru vendor specific ...passed 00:09:23.763 Test: blockdev nvme admin passthru ...[2024-04-24 19:23:49.349483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:23.763 [2024-04-24 19:23:49.349511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:23.763 passed 00:09:23.763 Test: blockdev copy ...passed 00:09:23.763 Suite: bdevio tests on: Nvme2n3 00:09:23.763 Test: blockdev write read block ...passed 00:09:23.763 Test: blockdev write zeroes read block ...passed 00:09:23.763 Test: blockdev write zeroes read no split ...passed 00:09:23.763 Test: blockdev write zeroes read split ...passed 00:09:23.763 Test: blockdev write zeroes read split partial ...passed 00:09:23.763 Test: blockdev reset ...[2024-04-24 19:23:49.438146] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:24.022 passed 00:09:24.022 Test: blockdev write read 8 blocks ...[2024-04-24 19:23:49.442665] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:24.022 passed 00:09:24.022 Test: blockdev write read size > 128k ...passed 00:09:24.022 Test: blockdev write read invalid size ...passed 00:09:24.022 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.022 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.022 Test: blockdev write read max offset ...passed 00:09:24.023 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.023 Test: blockdev writev readv 8 blocks ...passed 00:09:24.023 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.023 Test: blockdev writev readv block ...passed 00:09:24.023 Test: blockdev writev readv size > 128k ...passed 00:09:24.023 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.023 Test: blockdev comparev and writev ...[2024-04-24 19:23:49.456628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x289c0a000 len:0x1000 00:09:24.023 [2024-04-24 19:23:49.456691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:24.023 passed 00:09:24.023 Test: blockdev nvme passthru rw ...passed 00:09:24.023 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.023 Test: blockdev nvme admin passthru ...[2024-04-24 19:23:49.457455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:24.023 [2024-04-24 19:23:49.457480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:24.023 passed 00:09:24.023 Test: blockdev copy ...passed 00:09:24.023 Suite: bdevio tests on: Nvme2n2 00:09:24.023 Test: blockdev write read block ...passed 00:09:24.023 Test: blockdev write zeroes read block ...passed 00:09:24.023 Test: blockdev write zeroes read no split ...passed 00:09:24.023 Test: blockdev write zeroes read split ...passed 00:09:24.023 Test: blockdev write zeroes read split partial ...passed 00:09:24.023 Test: blockdev reset ...[2024-04-24 19:23:49.570295] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:24.023 passed 00:09:24.023 Test: blockdev write read 8 blocks ...[2024-04-24 19:23:49.573985] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:24.023 passed 00:09:24.023 Test: blockdev write read size > 128k ...passed 00:09:24.023 Test: blockdev write read invalid size ...passed 00:09:24.023 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.023 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.023 Test: blockdev write read max offset ...passed 00:09:24.023 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.023 Test: blockdev writev readv 8 blocks ...passed 00:09:24.023 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.023 Test: blockdev writev readv block ...passed 00:09:24.023 Test: blockdev writev readv size > 128k ...passed 00:09:24.023 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.023 Test: blockdev comparev and writev ...[2024-04-24 19:23:49.582023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e206000 len:0x1000 00:09:24.023 [2024-04-24 19:23:49.582068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:24.023 passed 00:09:24.023 Test: blockdev nvme passthru rw ...passed 00:09:24.023 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.023 Test: blockdev nvme admin passthru ...[2024-04-24 19:23:49.582771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:24.023 [2024-04-24 19:23:49.582802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:24.023 passed 00:09:24.023 Test: blockdev copy ...passed 00:09:24.023 Suite: bdevio tests on: Nvme2n1 00:09:24.023 Test: blockdev write read block ...passed 00:09:24.023 Test: blockdev write zeroes read block ...passed 00:09:24.023 Test: blockdev write zeroes read no split ...passed 00:09:24.023 Test: blockdev write zeroes read split ...passed 00:09:24.023 Test: blockdev write zeroes read split partial ...passed 00:09:24.023 Test: blockdev reset ...[2024-04-24 19:23:49.672885] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:24.023 passed 00:09:24.023 Test: blockdev write read 8 blocks ...[2024-04-24 19:23:49.676773] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:24.023 passed 00:09:24.023 Test: blockdev write read size > 128k ...passed 00:09:24.023 Test: blockdev write read invalid size ...passed 00:09:24.023 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.023 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.023 Test: blockdev write read max offset ...passed 00:09:24.023 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.023 Test: blockdev writev readv 8 blocks ...passed 00:09:24.023 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.023 Test: blockdev writev readv block ...passed 00:09:24.023 Test: blockdev writev readv size > 128k ...passed 00:09:24.023 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.023 Test: blockdev comparev and writev ...[2024-04-24 19:23:49.684236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27e201000 len:0x1000 00:09:24.023 [2024-04-24 19:23:49.684278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:24.023 passed 00:09:24.023 Test: blockdev nvme passthru rw ...passed 00:09:24.023 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.023 Test: blockdev nvme admin passthru ...[2024-04-24 19:23:49.684918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:24.023 [2024-04-24 19:23:49.684954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:24.023 passed 00:09:24.023 Test: blockdev copy ...passed 00:09:24.023 Suite: bdevio tests on: Nvme1n1 00:09:24.023 Test: blockdev write read block ...passed 00:09:24.023 Test: blockdev write zeroes read block ...passed 00:09:24.282 Test: blockdev write zeroes read no split ...passed 00:09:24.282 Test: blockdev write zeroes read split ...passed 00:09:24.282 Test: blockdev write zeroes read split partial ...passed 00:09:24.282 Test: blockdev reset ...[2024-04-24 19:23:49.788331] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:24.282 passed 00:09:24.282 Test: blockdev write read 8 blocks ...[2024-04-24 19:23:49.792234] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
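Note: every suite also runs a "blockdev reset" case. The paired notices (nvme_ctrlr_disconnect ... resetting controller, then Resetting controller successful.) show the controller at the given PCI address being torn down and re-attached, after which queued I/O must resume. Recent SPDK trees expose the same operation over RPC; a sketch, assuming the controllers were attached as Nvme0..Nvme3, which the bdev names suggest but this log does not show:
  ./scripts/rpc.py bdev_nvme_reset_controller Nvme1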
00:09:24.282 passed 00:09:24.282 Test: blockdev write read size > 128k ...passed 00:09:24.282 Test: blockdev write read invalid size ...passed 00:09:24.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.282 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.282 Test: blockdev write read max offset ...passed 00:09:24.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.282 Test: blockdev writev readv 8 blocks ...passed 00:09:24.282 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.282 Test: blockdev writev readv block ...passed 00:09:24.282 Test: blockdev writev readv size > 128k ...passed 00:09:24.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.282 Test: blockdev comparev and writev ...[2024-04-24 19:23:49.799807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28de06000 len:0x1000 00:09:24.282 [2024-04-24 19:23:49.799855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:24.282 passed 00:09:24.282 Test: blockdev nvme passthru rw ...passed 00:09:24.282 Test: blockdev nvme passthru vendor specific ...passed 00:09:24.282 Test: blockdev nvme admin passthru ...[2024-04-24 19:23:49.800439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:24.282 [2024-04-24 19:23:49.800464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:24.282 passed 00:09:24.282 Test: blockdev copy ...passed 00:09:24.282 Suite: bdevio tests on: Nvme0n1 00:09:24.282 Test: blockdev write read block ...passed 00:09:24.282 Test: blockdev write zeroes read block ...passed 00:09:24.282 Test: blockdev write zeroes read no split ...passed 00:09:24.282 Test: blockdev write zeroes read split ...passed 00:09:24.282 Test: blockdev write zeroes read split partial ...passed 00:09:24.282 Test: blockdev reset ...[2024-04-24 19:23:49.891616] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:24.282 passed 00:09:24.282 Test: blockdev write read 8 blocks ...[2024-04-24 19:23:49.895532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:24.282 passed 00:09:24.282 Test: blockdev write read size > 128k ...passed 00:09:24.282 Test: blockdev write read invalid size ...passed 00:09:24.282 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:24.282 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:24.282 Test: blockdev write read max offset ...passed 00:09:24.282 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:24.282 Test: blockdev writev readv 8 blocks ...passed 00:09:24.282 Test: blockdev writev readv 30 x 1block ...passed 00:09:24.282 Test: blockdev writev readv block ...passed 00:09:24.282 Test: blockdev writev readv size > 128k ...passed 00:09:24.282 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:24.282 Test: blockdev comparev and writev ...passed 00:09:24.282 Test: blockdev nvme passthru rw ...[2024-04-24 19:23:49.902499] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:24.282 separate metadata which is not supported yet. 
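Note: the ERROR from bdevio.c:727 above is a skip, not a failure. Nvme0n1 is formatted with separate (non-interleaved) metadata, which the comparev_and_writev path does not support yet, so the case is bypassed for that namespace only. A sketch for checking why a given bdev takes this branch (the md_size/md_interleave field names are an assumption about bdev_get_bdevs output):
  ./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'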
00:09:24.282 passed 00:09:24.282 Test: blockdev nvme passthru vendor specific ...[2024-04-24 19:23:49.902918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:24.282 [2024-04-24 19:23:49.902960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:24.282 passed 00:09:24.282 Test: blockdev nvme admin passthru ...passed 00:09:24.282 Test: blockdev copy ...passed 00:09:24.282 00:09:24.282 Run Summary: Type Total Ran Passed Failed Inactive 00:09:24.282 suites 6 6 n/a 0 0 00:09:24.282 tests 138 138 138 0 0 00:09:24.282 asserts 893 893 893 0 n/a 00:09:24.282 00:09:24.282 Elapsed time = 1.796 seconds 00:09:24.282 0 00:09:24.282 19:23:49 -- bdev/blockdev.sh@295 -- # killprocess 66728 00:09:24.282 19:23:49 -- common/autotest_common.sh@936 -- # '[' -z 66728 ']' 00:09:24.282 19:23:49 -- common/autotest_common.sh@940 -- # kill -0 66728 00:09:24.282 19:23:49 -- common/autotest_common.sh@941 -- # uname 00:09:24.282 19:23:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:24.282 19:23:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66728 00:09:24.540 killing process with pid 66728 00:09:24.540 19:23:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:24.540 19:23:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:24.540 19:23:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66728' 00:09:24.540 19:23:49 -- common/autotest_common.sh@955 -- # kill 66728 00:09:24.540 19:23:49 -- common/autotest_common.sh@960 -- # wait 66728 00:09:25.517 19:23:51 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:25.517 00:09:25.517 real 0m3.150s 00:09:25.517 user 0m7.796s 00:09:25.517 sys 0m0.375s 00:09:25.517 19:23:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:25.517 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:09:25.517 ************************************ 00:09:25.517 END TEST bdev_bounds 00:09:25.517 ************************************ 00:09:25.517 19:23:51 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:25.517 19:23:51 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:09:25.517 19:23:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:25.517 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:09:25.776 ************************************ 00:09:25.776 START TEST bdev_nbd 00:09:25.776 ************************************ 00:09:25.776 19:23:51 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:25.776 19:23:51 -- bdev/blockdev.sh@300 -- # uname -s 00:09:25.776 19:23:51 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:25.776 19:23:51 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:25.776 19:23:51 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:25.776 19:23:51 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:25.776 19:23:51 -- bdev/blockdev.sh@304 -- # local bdev_all 00:09:25.776 19:23:51 -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:09:25.776 19:23:51 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:25.776 19:23:51 -- bdev/blockdev.sh@311 -- # 
nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:25.776 19:23:51 -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:25.776 19:23:51 -- bdev/blockdev.sh@312 -- # bdev_num=6 00:09:25.776 19:23:51 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:25.776 19:23:51 -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:25.776 19:23:51 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:25.776 19:23:51 -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:25.776 19:23:51 -- bdev/blockdev.sh@318 -- # nbd_pid=66797 00:09:25.776 19:23:51 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:25.776 19:23:51 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:25.776 19:23:51 -- bdev/blockdev.sh@320 -- # waitforlisten 66797 /var/tmp/spdk-nbd.sock 00:09:25.776 19:23:51 -- common/autotest_common.sh@817 -- # '[' -z 66797 ']' 00:09:25.776 19:23:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:25.776 19:23:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:25.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:25.776 19:23:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:25.776 19:23:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:25.776 19:23:51 -- common/autotest_common.sh@10 -- # set +x 00:09:25.776 [2024-04-24 19:23:51.310154] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
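Note: the bdev_nbd test proper starts here. bdev_svc is launched against a dedicated RPC socket (/var/tmp/spdk-nbd.sock, nbd_pid=66797 above) and waitforlisten blocks until that socket answers. A minimal reproduction of the same bring-up, with the JSON config contents assumed:
  ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json bdev.json &
  until ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done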
00:09:25.776 [2024-04-24 19:23:51.310261] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.034 [2024-04-24 19:23:51.477458] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.294 [2024-04-24 19:23:51.718553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.863 19:23:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:26.863 19:23:52 -- common/autotest_common.sh@850 -- # return 0 00:09:26.863 19:23:52 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:26.863 19:23:52 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.863 19:23:52 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:26.863 19:23:52 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:26.863 19:23:52 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:26.863 19:23:52 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:26.863 19:23:52 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:26.863 19:23:52 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:26.863 19:23:52 -- bdev/nbd_common.sh@24 -- # local i 00:09:26.863 19:23:52 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:26.863 19:23:52 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:26.863 19:23:52 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:26.863 19:23:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:27.123 19:23:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:27.123 19:23:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:27.123 19:23:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:27.123 19:23:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:09:27.123 19:23:52 -- common/autotest_common.sh@855 -- # local i 00:09:27.123 19:23:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:27.123 19:23:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:27.123 19:23:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:09:27.123 19:23:52 -- common/autotest_common.sh@859 -- # break 00:09:27.123 19:23:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:27.123 19:23:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:27.123 19:23:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.123 1+0 records in 00:09:27.123 1+0 records out 00:09:27.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0007012 s, 5.8 MB/s 00:09:27.123 19:23:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.123 19:23:52 -- common/autotest_common.sh@872 -- # size=4096 00:09:27.123 19:23:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.123 19:23:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:27.123 19:23:52 -- common/autotest_common.sh@875 -- # return 0 00:09:27.123 19:23:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.123 19:23:52 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:27.123 19:23:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:27.382 19:23:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:27.382 19:23:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:27.382 19:23:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:27.382 19:23:52 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:09:27.382 19:23:52 -- common/autotest_common.sh@855 -- # local i 00:09:27.382 19:23:52 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:27.382 19:23:52 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:27.382 19:23:52 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:09:27.382 19:23:52 -- common/autotest_common.sh@859 -- # break 00:09:27.382 19:23:52 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:27.382 19:23:52 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:27.382 19:23:52 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.382 1+0 records in 00:09:27.382 1+0 records out 00:09:27.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548702 s, 7.5 MB/s 00:09:27.382 19:23:52 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.382 19:23:52 -- common/autotest_common.sh@872 -- # size=4096 00:09:27.382 19:23:52 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.382 19:23:52 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:27.382 19:23:52 -- common/autotest_common.sh@875 -- # return 0 00:09:27.382 19:23:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.382 19:23:52 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:27.382 19:23:52 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:27.641 19:23:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:27.641 19:23:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:27.641 19:23:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:27.641 19:23:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:09:27.641 19:23:53 -- common/autotest_common.sh@855 -- # local i 00:09:27.641 19:23:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:27.641 19:23:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:27.641 19:23:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:09:27.641 19:23:53 -- common/autotest_common.sh@859 -- # break 00:09:27.641 19:23:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:27.641 19:23:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:27.641 19:23:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.641 1+0 records in 00:09:27.641 1+0 records out 00:09:27.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070273 s, 5.8 MB/s 00:09:27.641 19:23:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.641 19:23:53 -- common/autotest_common.sh@872 -- # size=4096 00:09:27.641 19:23:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.641 19:23:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:27.641 19:23:53 -- common/autotest_common.sh@875 -- # return 0 
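Note: the start/stop-verify loop repeats one pattern per bdev: nbd_start_disk is called without a device argument so the kernel allocates one (the RPC prints the path, hence nbd_device=/dev/nbd0 above), waitfornbd polls /proc/partitions for it, and a single direct-I/O 4 KiB dd read proves the device is usable. Condensed into a sketch:
  nbd=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1)   # kernel picks /dev/nbdX
  grep -q -w "$(basename "$nbd")" /proc/partitions
  dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct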
00:09:27.641 19:23:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.641 19:23:53 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:27.641 19:23:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:27.900 19:23:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:27.900 19:23:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:27.900 19:23:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:27.900 19:23:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:09:27.900 19:23:53 -- common/autotest_common.sh@855 -- # local i 00:09:27.900 19:23:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:27.900 19:23:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:27.900 19:23:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:09:27.900 19:23:53 -- common/autotest_common.sh@859 -- # break 00:09:27.900 19:23:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:27.900 19:23:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:27.900 19:23:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:27.900 1+0 records in 00:09:27.900 1+0 records out 00:09:27.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693548 s, 5.9 MB/s 00:09:27.900 19:23:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.900 19:23:53 -- common/autotest_common.sh@872 -- # size=4096 00:09:27.900 19:23:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:27.900 19:23:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:27.900 19:23:53 -- common/autotest_common.sh@875 -- # return 0 00:09:27.900 19:23:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:27.900 19:23:53 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:27.900 19:23:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:28.159 19:23:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:28.159 19:23:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:28.159 19:23:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:28.159 19:23:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:09:28.159 19:23:53 -- common/autotest_common.sh@855 -- # local i 00:09:28.159 19:23:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:28.159 19:23:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:28.159 19:23:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:09:28.159 19:23:53 -- common/autotest_common.sh@859 -- # break 00:09:28.159 19:23:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:28.159 19:23:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:28.159 19:23:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.159 1+0 records in 00:09:28.159 1+0 records out 00:09:28.159 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480969 s, 8.5 MB/s 00:09:28.159 19:23:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.159 19:23:53 -- common/autotest_common.sh@872 -- # size=4096 00:09:28.159 19:23:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.159 19:23:53 -- common/autotest_common.sh@874 -- # '[' 
4096 '!=' 0 ']' 00:09:28.159 19:23:53 -- common/autotest_common.sh@875 -- # return 0 00:09:28.159 19:23:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:28.159 19:23:53 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:28.159 19:23:53 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:28.418 19:23:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:28.418 19:23:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:28.418 19:23:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:28.418 19:23:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:09:28.418 19:23:53 -- common/autotest_common.sh@855 -- # local i 00:09:28.418 19:23:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:28.418 19:23:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:28.418 19:23:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:09:28.418 19:23:53 -- common/autotest_common.sh@859 -- # break 00:09:28.418 19:23:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:28.418 19:23:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:28.418 19:23:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.418 1+0 records in 00:09:28.418 1+0 records out 00:09:28.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497134 s, 8.2 MB/s 00:09:28.418 19:23:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.418 19:23:53 -- common/autotest_common.sh@872 -- # size=4096 00:09:28.418 19:23:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:28.418 19:23:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:28.418 19:23:53 -- common/autotest_common.sh@875 -- # return 0 00:09:28.418 19:23:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:28.418 19:23:53 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:28.418 19:23:53 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:28.677 19:23:54 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:28.677 { 00:09:28.677 "nbd_device": "/dev/nbd0", 00:09:28.677 "bdev_name": "Nvme0n1" 00:09:28.677 }, 00:09:28.677 { 00:09:28.677 "nbd_device": "/dev/nbd1", 00:09:28.677 "bdev_name": "Nvme1n1" 00:09:28.677 }, 00:09:28.677 { 00:09:28.677 "nbd_device": "/dev/nbd2", 00:09:28.677 "bdev_name": "Nvme2n1" 00:09:28.677 }, 00:09:28.677 { 00:09:28.677 "nbd_device": "/dev/nbd3", 00:09:28.677 "bdev_name": "Nvme2n2" 00:09:28.677 }, 00:09:28.677 { 00:09:28.677 "nbd_device": "/dev/nbd4", 00:09:28.677 "bdev_name": "Nvme2n3" 00:09:28.677 }, 00:09:28.677 { 00:09:28.677 "nbd_device": "/dev/nbd5", 00:09:28.677 "bdev_name": "Nvme3n1" 00:09:28.677 } 00:09:28.677 ]' 00:09:28.677 19:23:54 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:28.677 19:23:54 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:28.677 { 00:09:28.677 "nbd_device": "/dev/nbd0", 00:09:28.677 "bdev_name": "Nvme0n1" 00:09:28.677 }, 00:09:28.677 { 00:09:28.677 "nbd_device": "/dev/nbd1", 00:09:28.677 "bdev_name": "Nvme1n1" 00:09:28.677 }, 00:09:28.677 { 00:09:28.677 "nbd_device": "/dev/nbd2", 00:09:28.677 "bdev_name": "Nvme2n1" 00:09:28.678 }, 00:09:28.678 { 00:09:28.678 "nbd_device": "/dev/nbd3", 00:09:28.678 "bdev_name": "Nvme2n2" 00:09:28.678 }, 00:09:28.678 { 00:09:28.678 "nbd_device": 
"/dev/nbd4", 00:09:28.678 "bdev_name": "Nvme2n3" 00:09:28.678 }, 00:09:28.678 { 00:09:28.678 "nbd_device": "/dev/nbd5", 00:09:28.678 "bdev_name": "Nvme3n1" 00:09:28.678 } 00:09:28.678 ]' 00:09:28.678 19:23:54 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:28.678 19:23:54 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:28.678 19:23:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:28.678 19:23:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:28.678 19:23:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:28.678 19:23:54 -- bdev/nbd_common.sh@51 -- # local i 00:09:28.678 19:23:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.678 19:23:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@41 -- # break 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@41 -- # break 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@45 -- # return 0 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:28.937 19:23:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:29.196 19:23:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:29.197 19:23:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:29.197 19:23:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:29.197 19:23:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.197 19:23:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.197 19:23:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:29.197 19:23:54 -- bdev/nbd_common.sh@41 -- # break 00:09:29.197 19:23:54 -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.197 19:23:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.197 19:23:54 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:29.459 19:23:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:29.459 19:23:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:29.459 19:23:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:29.459 
19:23:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.459 19:23:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.459 19:23:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:29.459 19:23:55 -- bdev/nbd_common.sh@41 -- # break 00:09:29.459 19:23:55 -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.459 19:23:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.459 19:23:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:29.724 19:23:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:29.724 19:23:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:29.724 19:23:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:29.724 19:23:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.724 19:23:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.724 19:23:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:29.724 19:23:55 -- bdev/nbd_common.sh@41 -- # break 00:09:29.724 19:23:55 -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.724 19:23:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.724 19:23:55 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@41 -- # break 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:29.991 19:23:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@65 -- # true 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@65 -- # count=0 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@122 -- # count=0 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@127 -- # return 0 00:09:30.253 19:23:55 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@12 -- # local i 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:30.253 /dev/nbd0 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:30.253 19:23:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:09:30.253 19:23:55 -- common/autotest_common.sh@855 -- # local i 00:09:30.253 19:23:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:30.253 19:23:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:30.253 19:23:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:09:30.253 19:23:55 -- common/autotest_common.sh@859 -- # break 00:09:30.253 19:23:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:30.253 19:23:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:30.253 19:23:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:30.253 1+0 records in 00:09:30.253 1+0 records out 00:09:30.253 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628803 s, 6.5 MB/s 00:09:30.253 19:23:55 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.253 19:23:55 -- common/autotest_common.sh@872 -- # size=4096 00:09:30.253 19:23:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.253 19:23:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:30.253 19:23:55 -- common/autotest_common.sh@875 -- # return 0 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:30.253 19:23:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:30.521 /dev/nbd1 00:09:30.521 19:23:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:30.521 19:23:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:30.521 19:23:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:09:30.521 19:23:56 -- common/autotest_common.sh@855 -- # local i 00:09:30.521 19:23:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:30.521 19:23:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:30.521 19:23:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:09:30.521 19:23:56 -- common/autotest_common.sh@859 -- # break 
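Note: for the data-verify phase the mapping is explicit: each of the six bdevs is attached to a named device (Nvme0n1 on /dev/nbd0, Nvme1n1 on /dev/nbd1, Nvme2n1 through Nvme3n1 on /dev/nbd10-13), and further down a 1 MiB urandom file is written through every device with oflag=direct and read back with cmp -b -n 1M. The shape of that pass, as a sketch:
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  for d in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
    dd if=nbdrandtest of="$d" bs=4096 count=256 oflag=direct   # push the pattern through the nbd device
    cmp -b -n 1M nbdrandtest "$d"                              # byte-compare what the bdev gives back
  done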
00:09:30.521 19:23:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:30.521 19:23:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:30.521 19:23:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:30.521 1+0 records in 00:09:30.521 1+0 records out 00:09:30.521 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494534 s, 8.3 MB/s 00:09:30.521 19:23:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.521 19:23:56 -- common/autotest_common.sh@872 -- # size=4096 00:09:30.521 19:23:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.521 19:23:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:30.521 19:23:56 -- common/autotest_common.sh@875 -- # return 0 00:09:30.521 19:23:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.521 19:23:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:30.521 19:23:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:30.796 /dev/nbd10 00:09:30.796 19:23:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:30.796 19:23:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:30.796 19:23:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:09:30.796 19:23:56 -- common/autotest_common.sh@855 -- # local i 00:09:30.796 19:23:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:30.796 19:23:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:30.796 19:23:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:09:30.796 19:23:56 -- common/autotest_common.sh@859 -- # break 00:09:30.796 19:23:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:30.796 19:23:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:30.796 19:23:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:30.796 1+0 records in 00:09:30.796 1+0 records out 00:09:30.796 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576771 s, 7.1 MB/s 00:09:30.796 19:23:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.796 19:23:56 -- common/autotest_common.sh@872 -- # size=4096 00:09:30.796 19:23:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:30.796 19:23:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:30.796 19:23:56 -- common/autotest_common.sh@875 -- # return 0 00:09:30.796 19:23:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:30.796 19:23:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:30.796 19:23:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:31.055 /dev/nbd11 00:09:31.055 19:23:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:31.055 19:23:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:31.055 19:23:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:09:31.055 19:23:56 -- common/autotest_common.sh@855 -- # local i 00:09:31.055 19:23:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:31.055 19:23:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:31.055 19:23:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:09:31.055 19:23:56 -- 
common/autotest_common.sh@859 -- # break 00:09:31.055 19:23:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:31.055 19:23:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:31.055 19:23:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.055 1+0 records in 00:09:31.055 1+0 records out 00:09:31.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713436 s, 5.7 MB/s 00:09:31.055 19:23:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.055 19:23:56 -- common/autotest_common.sh@872 -- # size=4096 00:09:31.055 19:23:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.055 19:23:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:31.055 19:23:56 -- common/autotest_common.sh@875 -- # return 0 00:09:31.055 19:23:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.055 19:23:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:31.055 19:23:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:31.314 /dev/nbd12 00:09:31.314 19:23:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:31.314 19:23:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:31.314 19:23:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:09:31.314 19:23:56 -- common/autotest_common.sh@855 -- # local i 00:09:31.314 19:23:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:31.314 19:23:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:31.314 19:23:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:09:31.314 19:23:56 -- common/autotest_common.sh@859 -- # break 00:09:31.314 19:23:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:31.314 19:23:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:31.314 19:23:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.314 1+0 records in 00:09:31.314 1+0 records out 00:09:31.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514203 s, 8.0 MB/s 00:09:31.314 19:23:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.314 19:23:56 -- common/autotest_common.sh@872 -- # size=4096 00:09:31.314 19:23:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.314 19:23:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:31.314 19:23:56 -- common/autotest_common.sh@875 -- # return 0 00:09:31.314 19:23:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.314 19:23:56 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:31.314 19:23:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:31.573 /dev/nbd13 00:09:31.573 19:23:57 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:31.573 19:23:57 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:31.573 19:23:57 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:09:31.573 19:23:57 -- common/autotest_common.sh@855 -- # local i 00:09:31.574 19:23:57 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:31.574 19:23:57 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:31.574 19:23:57 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 
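Note: waitfornbd and waitfornbd_exit, whose iterations fill most of these traces, are simple bounded polls of /proc/partitions (up to 20 attempts, as the (( i <= 20 )) lines show). A simplified sketch; the real helper in autotest_common.sh additionally dd-probes the device before declaring success, and the sleep interval here is an assumption:
  waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$nbd_name" /proc/partitions && return 0
      sleep 0.1
    done
    return 1
  }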
00:09:31.574 19:23:57 -- common/autotest_common.sh@859 -- # break 00:09:31.574 19:23:57 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:31.574 19:23:57 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:31.574 19:23:57 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:31.574 1+0 records in 00:09:31.574 1+0 records out 00:09:31.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866456 s, 4.7 MB/s 00:09:31.574 19:23:57 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.574 19:23:57 -- common/autotest_common.sh@872 -- # size=4096 00:09:31.574 19:23:57 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:31.574 19:23:57 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:31.574 19:23:57 -- common/autotest_common.sh@875 -- # return 0 00:09:31.574 19:23:57 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:31.574 19:23:57 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:31.574 19:23:57 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:31.574 19:23:57 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:31.574 19:23:57 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd0", 00:09:31.836 "bdev_name": "Nvme0n1" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd1", 00:09:31.836 "bdev_name": "Nvme1n1" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd10", 00:09:31.836 "bdev_name": "Nvme2n1" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd11", 00:09:31.836 "bdev_name": "Nvme2n2" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd12", 00:09:31.836 "bdev_name": "Nvme2n3" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd13", 00:09:31.836 "bdev_name": "Nvme3n1" 00:09:31.836 } 00:09:31.836 ]' 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd0", 00:09:31.836 "bdev_name": "Nvme0n1" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd1", 00:09:31.836 "bdev_name": "Nvme1n1" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd10", 00:09:31.836 "bdev_name": "Nvme2n1" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd11", 00:09:31.836 "bdev_name": "Nvme2n2" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd12", 00:09:31.836 "bdev_name": "Nvme2n3" 00:09:31.836 }, 00:09:31.836 { 00:09:31.836 "nbd_device": "/dev/nbd13", 00:09:31.836 "bdev_name": "Nvme3n1" 00:09:31.836 } 00:09:31.836 ]' 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:31.836 /dev/nbd1 00:09:31.836 /dev/nbd10 00:09:31.836 /dev/nbd11 00:09:31.836 /dev/nbd12 00:09:31.836 /dev/nbd13' 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:31.836 /dev/nbd1 00:09:31.836 /dev/nbd10 00:09:31.836 /dev/nbd11 00:09:31.836 /dev/nbd12 00:09:31.836 /dev/nbd13' 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@65 -- # count=6 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@66 -- # echo 6 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@95 -- # 
count=6 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:31.836 256+0 records in 00:09:31.836 256+0 records out 00:09:31.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130883 s, 80.1 MB/s 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:31.836 256+0 records in 00:09:31.836 256+0 records out 00:09:31.836 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0948151 s, 11.1 MB/s 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:31.836 19:23:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:32.097 256+0 records in 00:09:32.097 256+0 records out 00:09:32.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106671 s, 9.8 MB/s 00:09:32.097 19:23:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.097 19:23:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:32.097 256+0 records in 00:09:32.097 256+0 records out 00:09:32.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138997 s, 7.5 MB/s 00:09:32.097 19:23:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.097 19:23:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:32.097 256+0 records in 00:09:32.097 256+0 records out 00:09:32.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0986514 s, 10.6 MB/s 00:09:32.097 19:23:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.097 19:23:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:32.356 256+0 records in 00:09:32.356 256+0 records out 00:09:32.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.100072 s, 10.5 MB/s 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:32.356 256+0 records in 00:09:32.356 256+0 records out 00:09:32.356 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0967026 s, 10.8 MB/s 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:32.356 19:23:57 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.356 19:23:57 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:32.356 19:23:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.356 19:23:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:32.357 19:23:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:32.357 19:23:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:32.357 19:23:58 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:32.357 19:23:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:32.357 19:23:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:32.357 19:23:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:32.357 19:23:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:32.357 19:23:58 -- bdev/nbd_common.sh@51 -- # local i 00:09:32.357 19:23:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.357 19:23:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:32.615 19:23:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:32.615 19:23:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:32.615 19:23:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:32.615 19:23:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.615 19:23:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.615 19:23:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:32.615 19:23:58 -- bdev/nbd_common.sh@41 -- # break 00:09:32.615 19:23:58 -- bdev/nbd_common.sh@45 -- # return 0 00:09:32.615 19:23:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.615 19:23:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:32.874 19:23:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:32.874 19:23:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:32.874 19:23:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:32.874 19:23:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:32.874 19:23:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:32.874 19:23:58 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:32.874 19:23:58 -- bdev/nbd_common.sh@41 -- # break 00:09:32.874 19:23:58 -- bdev/nbd_common.sh@45 -- # return 0 00:09:32.874 19:23:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:32.874 19:23:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:33.133 19:23:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:33.133 19:23:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:33.133 19:23:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:33.133 19:23:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.133 19:23:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.133 19:23:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:33.133 19:23:58 -- bdev/nbd_common.sh@41 -- # break 00:09:33.133 19:23:58 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.133 19:23:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.133 19:23:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:33.392 19:23:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:33.392 19:23:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:33.392 19:23:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:33.392 19:23:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.392 19:23:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.392 19:23:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:33.392 19:23:58 -- bdev/nbd_common.sh@41 -- # break 00:09:33.392 19:23:58 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.392 19:23:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.392 19:23:58 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:33.392 19:23:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:33.392 19:23:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:33.392 19:23:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:33.392 19:23:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.392 19:23:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.392 19:23:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:33.392 19:23:59 -- bdev/nbd_common.sh@41 -- # break 00:09:33.392 19:23:59 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.392 19:23:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:33.392 19:23:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:33.651 19:23:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:33.651 19:23:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:33.651 19:23:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:33.651 19:23:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:33.651 19:23:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:33.651 19:23:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:33.651 19:23:59 -- bdev/nbd_common.sh@41 -- # break 00:09:33.651 19:23:59 -- bdev/nbd_common.sh@45 -- # return 0 00:09:33.651 19:23:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:33.651 19:23:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.651 19:23:59 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@65 -- # true 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@65 -- # count=0 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@104 -- # count=0 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@109 -- # return 0 00:09:33.910 19:23:59 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:33.910 19:23:59 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:34.168 malloc_lvol_verify 00:09:34.168 19:23:59 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:34.425 e739c1ec-f51e-4488-a139-6758612ab7a1 00:09:34.425 19:23:59 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:34.425 3711b142-1699-40ef-9eb3-ed8cd1f976bc 00:09:34.735 19:24:00 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:34.735 /dev/nbd0 00:09:34.735 19:24:00 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:34.735 mke2fs 1.46.5 (30-Dec-2021) 00:09:34.735 Discarding device blocks: 0/4096 done 00:09:34.735 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:34.735 00:09:34.735 Allocating group tables: 0/1 done 00:09:34.735 Writing inode tables: 0/1 done 00:09:34.735 Creating journal (1024 blocks): done 00:09:34.735 Writing superblocks and filesystem accounting information: 0/1 done 00:09:34.735 00:09:34.735 19:24:00 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:34.735 19:24:00 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:34.735 19:24:00 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:34.735 19:24:00 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:34.735 19:24:00 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:34.735 19:24:00 -- bdev/nbd_common.sh@51 -- # local i 00:09:34.735 19:24:00 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:34.735 19:24:00 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:34.993 19:24:00 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:34.993 19:24:00 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:34.993 19:24:00 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:09:34.993 19:24:00 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:34.993 19:24:00 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:34.993 19:24:00 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:34.993 19:24:00 -- bdev/nbd_common.sh@41 -- # break 00:09:34.993 19:24:00 -- bdev/nbd_common.sh@45 -- # return 0 00:09:34.993 19:24:00 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:34.993 19:24:00 -- bdev/nbd_common.sh@147 -- # return 0 00:09:34.993 19:24:00 -- bdev/blockdev.sh@326 -- # killprocess 66797 00:09:34.993 19:24:00 -- common/autotest_common.sh@936 -- # '[' -z 66797 ']' 00:09:34.993 19:24:00 -- common/autotest_common.sh@940 -- # kill -0 66797 00:09:34.993 19:24:00 -- common/autotest_common.sh@941 -- # uname 00:09:34.993 19:24:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:34.993 19:24:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66797 00:09:34.993 19:24:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:34.993 19:24:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:34.993 19:24:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66797' 00:09:34.993 killing process with pid 66797 00:09:34.993 19:24:00 -- common/autotest_common.sh@955 -- # kill 66797 00:09:34.993 19:24:00 -- common/autotest_common.sh@960 -- # wait 66797 00:09:36.369 19:24:01 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:36.369 00:09:36.369 real 0m10.609s 00:09:36.369 user 0m14.373s 00:09:36.369 sys 0m3.470s 00:09:36.369 19:24:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:36.369 ************************************ 00:09:36.369 END TEST bdev_nbd 00:09:36.369 ************************************ 00:09:36.369 19:24:01 -- common/autotest_common.sh@10 -- # set +x 00:09:36.369 19:24:01 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:36.369 19:24:01 -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:09:36.369 19:24:01 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:36.369 skipping fio tests on NVMe due to multi-ns failures. 00:09:36.369 19:24:01 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:36.369 19:24:01 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:36.369 19:24:01 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:36.369 19:24:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.369 19:24:01 -- common/autotest_common.sh@10 -- # set +x 00:09:36.369 ************************************ 00:09:36.369 START TEST bdev_verify 00:09:36.369 ************************************ 00:09:36.369 19:24:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:36.369 [2024-04-24 19:24:02.044638] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:09:36.369 [2024-04-24 19:24:02.044763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67180 ] 00:09:36.627 [2024-04-24 19:24:02.210300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:36.885 [2024-04-24 19:24:02.456645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.885 [2024-04-24 19:24:02.456706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.822 Running I/O for 5 seconds... 00:09:43.169 00:09:43.169 Latency(us) 00:09:43.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:43.169 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:43.169 Verification LBA range: start 0x0 length 0xbd0bd 00:09:43.169 Nvme0n1 : 5.05 1646.76 6.43 0.00 0.00 77466.07 15339.43 85626.08 00:09:43.169 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:43.169 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:43.169 Nvme0n1 : 5.08 1625.46 6.35 0.00 0.00 78479.90 9043.40 95699.73 00:09:43.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:43.169 Verification LBA range: start 0x0 length 0xa0000 00:09:43.169 Nvme1n1 : 5.05 1646.28 6.43 0.00 0.00 77328.66 14767.06 77841.89 00:09:43.169 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:43.169 Verification LBA range: start 0xa0000 length 0xa0000 00:09:43.169 Nvme1n1 : 5.08 1624.88 6.35 0.00 0.00 78347.84 9901.95 84710.29 00:09:43.169 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:43.169 Verification LBA range: start 0x0 length 0x80000 00:09:43.169 Nvme2n1 : 5.06 1645.83 6.43 0.00 0.00 77131.93 14309.17 72805.06 00:09:43.169 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:43.169 Verification LBA range: start 0x80000 length 0x80000 00:09:43.169 Nvme2n1 : 5.08 1624.43 6.35 0.00 0.00 78209.16 7784.19 76926.10 00:09:43.169 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:43.169 Verification LBA range: start 0x0 length 0x80000 00:09:43.169 Nvme2n2 : 5.08 1652.01 6.45 0.00 0.00 76679.05 8127.61 73262.95 00:09:43.169 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:43.169 Verification LBA range: start 0x80000 length 0x80000 00:09:43.169 Nvme2n2 : 5.08 1623.99 6.34 0.00 0.00 78058.85 7898.66 83794.50 00:09:43.169 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:43.169 Verification LBA range: start 0x0 length 0x80000 00:09:43.169 Nvme2n3 : 5.09 1660.99 6.49 0.00 0.00 76237.19 7841.43 76468.21 00:09:43.169 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:43.169 Verification LBA range: start 0x80000 length 0x80000 00:09:43.169 Nvme2n3 : 5.09 1623.53 6.34 0.00 0.00 77910.65 7955.90 91578.69 00:09:43.169 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:43.169 Verification LBA range: start 0x0 length 0x20000 00:09:43.169 Nvme3n1 : 5.09 1660.32 6.49 0.00 0.00 76113.37 8471.03 78757.67 00:09:43.169 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:43.169 Verification LBA range: start 0x20000 length 0x20000 00:09:43.169 Nvme3n1 : 5.09 1623.14 6.34 0.00 0.00 77773.19 7612.48 97989.20 00:09:43.169 
=================================================================================================================== 00:09:43.169 Total : 19657.62 76.79 0.00 0.00 77471.62 7612.48 97989.20 00:09:44.545 00:09:44.545 real 0m8.089s 00:09:44.545 user 0m14.757s 00:09:44.545 sys 0m0.282s 00:09:44.545 ************************************ 00:09:44.545 END TEST bdev_verify 00:09:44.545 ************************************ 00:09:44.545 19:24:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:44.545 19:24:10 -- common/autotest_common.sh@10 -- # set +x 00:09:44.545 19:24:10 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:44.545 19:24:10 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:44.545 19:24:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:44.545 19:24:10 -- common/autotest_common.sh@10 -- # set +x 00:09:44.545 ************************************ 00:09:44.545 START TEST bdev_verify_big_io 00:09:44.545 ************************************ 00:09:44.545 19:24:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:44.805 [2024-04-24 19:24:10.235522] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:09:44.805 [2024-04-24 19:24:10.235646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67289 ] 00:09:44.805 [2024-04-24 19:24:10.379916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:45.063 [2024-04-24 19:24:10.640740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.064 [2024-04-24 19:24:10.640770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.006 Running I/O for 5 seconds... 
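Both verification passes around this point drive the same bdevperf frontend: the 4 KiB run that just finished above and the 64 KiB big-I/O run now in flight below differ only in the -o size and the LBA ranges they report. For reference, a standalone sketch of the first invocation with the arguments exactly as traced; SPDK_REPO is an assumed shorthand for the checkout path, and the flag glosses in the comments follow standard bdevperf usage:

    # -q 128: 128 outstanding I/Os; -o 4096: 4 KiB I/O size; -w verify:
    # write-then-read-back workload; -t 5: run for 5 seconds; -m 0x3:
    # reactors on cores 0 and 1; -C passed through as the harness does.
    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    "$SPDK_REPO/build/examples/bdevperf" \
        --json "$SPDK_REPO/test/bdev/bdev.json" \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3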
00:09:52.621 00:09:52.621 Latency(us) 00:09:52.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.621 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:52.621 Verification LBA range: start 0x0 length 0xbd0b 00:09:52.621 Nvme0n1 : 5.70 131.90 8.24 0.00 0.00 939252.72 24840.72 1062312.80 00:09:52.621 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:52.621 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:52.621 Nvme0n1 : 5.58 137.73 8.61 0.00 0.00 902345.89 18659.16 864502.83 00:09:52.621 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:52.621 Verification LBA range: start 0x0 length 0xa000 00:09:52.621 Nvme1n1 : 5.76 137.68 8.61 0.00 0.00 873105.51 29992.02 890144.87 00:09:52.621 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:52.621 Verification LBA range: start 0xa000 length 0xa000 00:09:52.621 Nvme1n1 : 5.58 137.66 8.60 0.00 0.00 878823.33 57007.73 820545.06 00:09:52.621 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:52.621 Verification LBA range: start 0x0 length 0x8000 00:09:52.621 Nvme2n1 : 5.76 141.26 8.83 0.00 0.00 834975.17 41897.25 912123.75 00:09:52.621 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:52.621 Verification LBA range: start 0x8000 length 0x8000 00:09:52.621 Nvme2n1 : 5.69 138.60 8.66 0.00 0.00 845391.01 85626.08 816881.91 00:09:52.621 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:52.621 Verification LBA range: start 0x0 length 0x8000 00:09:52.621 Nvme2n2 : 5.80 141.44 8.84 0.00 0.00 810360.70 17171.00 1391996.09 00:09:52.621 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:52.621 Verification LBA range: start 0x8000 length 0x8000 00:09:52.621 Nvme2n2 : 5.71 145.74 9.11 0.00 0.00 794800.59 20261.79 974397.26 00:09:52.621 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:52.621 Verification LBA range: start 0x0 length 0x8000 00:09:52.621 Nvme2n3 : 5.81 151.54 9.47 0.00 0.00 736197.58 21520.99 981723.56 00:09:52.621 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:52.621 Verification LBA range: start 0x8000 length 0x8000 00:09:52.621 Nvme2n3 : 5.77 151.75 9.48 0.00 0.00 744418.71 51970.91 835197.65 00:09:52.621 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:52.621 Verification LBA range: start 0x0 length 0x2000 00:09:52.621 Nvme3n1 : 5.84 162.52 10.16 0.00 0.00 670782.29 6811.17 1450606.45 00:09:52.621 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:52.621 Verification LBA range: start 0x2000 length 0x2000 00:09:52.621 Nvme3n1 : 5.78 159.50 9.97 0.00 0.00 692504.86 3133.71 831534.50 00:09:52.621 =================================================================================================================== 00:09:52.621 Total : 1737.31 108.58 0.00 0.00 804383.55 3133.71 1450606.45 00:09:54.527 00:09:54.527 real 0m9.864s 00:09:54.527 user 0m18.249s 00:09:54.527 sys 0m0.310s 00:09:54.527 19:24:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:54.527 19:24:20 -- common/autotest_common.sh@10 -- # set +x 00:09:54.527 ************************************ 00:09:54.527 END TEST bdev_verify_big_io 00:09:54.527 ************************************ 00:09:54.527 19:24:20 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:54.527 19:24:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:54.527 19:24:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:54.527 19:24:20 -- common/autotest_common.sh@10 -- # set +x 00:09:54.527 ************************************ 00:09:54.527 START TEST bdev_write_zeroes 00:09:54.527 ************************************ 00:09:54.527 19:24:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:54.786 [2024-04-24 19:24:20.274780] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:09:54.786 [2024-04-24 19:24:20.274980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67420 ] 00:09:54.786 [2024-04-24 19:24:20.445919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.352 [2024-04-24 19:24:20.728450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.920 Running I/O for 1 seconds... 00:09:56.864 00:09:56.864 Latency(us) 00:09:56.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.864 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:56.864 Nvme0n1 : 1.02 8735.09 34.12 0.00 0.00 14621.34 4865.12 95241.84 00:09:56.864 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:56.865 Nvme1n1 : 1.02 8781.72 34.30 0.00 0.00 14525.63 9215.11 90205.01 00:09:56.865 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:56.865 Nvme2n1 : 1.02 8771.75 34.26 0.00 0.00 14441.66 8699.98 90205.01 00:09:56.865 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:56.865 Nvme2n2 : 1.02 8761.35 34.22 0.00 0.00 14421.19 8757.21 90662.90 00:09:56.865 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:56.865 Nvme2n3 : 1.02 8749.40 34.18 0.00 0.00 14400.65 9100.63 91120.80 00:09:56.865 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:56.865 Nvme3n1 : 1.03 8799.81 34.37 0.00 0.00 14274.73 9215.11 84710.29 00:09:56.865 =================================================================================================================== 00:09:56.865 Total : 52599.12 205.47 0.00 0.00 14447.14 4865.12 95241.84 00:09:58.797 00:09:58.797 real 0m3.776s 00:09:58.797 user 0m3.410s 00:09:58.797 sys 0m0.245s 00:09:58.797 19:24:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:58.797 19:24:23 -- common/autotest_common.sh@10 -- # set +x 00:09:58.797 ************************************ 00:09:58.797 END TEST bdev_write_zeroes 00:09:58.797 ************************************ 00:09:58.798 19:24:24 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:58.798 19:24:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:58.798 19:24:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:58.798 19:24:24 -- common/autotest_common.sh@10 -- # 
set +x 00:09:58.798 ************************************ 00:09:58.798 START TEST bdev_json_nonenclosed 00:09:58.798 ************************************ 00:09:58.798 19:24:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:58.798 [2024-04-24 19:24:24.192470] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:09:58.798 [2024-04-24 19:24:24.192573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67490 ] 00:09:58.798 [2024-04-24 19:24:24.355683] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.057 [2024-04-24 19:24:24.602242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.058 [2024-04-24 19:24:24.602340] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:59.058 [2024-04-24 19:24:24.602368] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:59.058 [2024-04-24 19:24:24.602380] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.627 00:09:59.627 real 0m0.951s 00:09:59.627 user 0m0.706s 00:09:59.627 sys 0m0.138s 00:09:59.627 19:24:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:59.627 19:24:25 -- common/autotest_common.sh@10 -- # set +x 00:09:59.627 ************************************ 00:09:59.627 END TEST bdev_json_nonenclosed 00:09:59.627 ************************************ 00:09:59.627 19:24:25 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:59.627 19:24:25 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:59.627 19:24:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:59.627 19:24:25 -- common/autotest_common.sh@10 -- # set +x 00:09:59.627 ************************************ 00:09:59.627 START TEST bdev_json_nonarray 00:09:59.627 ************************************ 00:09:59.627 19:24:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:59.627 [2024-04-24 19:24:25.282664] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:09:59.627 [2024-04-24 19:24:25.282765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67525 ] 00:09:59.886 [2024-04-24 19:24:25.447127] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.146 [2024-04-24 19:24:25.687229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.146 [2024-04-24 19:24:25.687333] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
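The two JSON negative tests here pass only when a malformed config makes startup fail, so this *ERROR* line, together with the RPC teardown and non-zero spdk_app_stop just below, is the expected failure path. For contrast, a minimal well-formed config has the shape sketched next; the attach entry mirrors the gen_nvme.sh output used later in this run, while the /tmp path is purely illustrative:

    # nonenclosed.json evidently drops the outer {} and nonarray.json makes
    # "subsystems" a non-array, which is exactly what the two error
    # messages report.
    cat > /tmp/minimal_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
            }
          ]
        }
      ]
    }
    EOF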
00:10:00.146 [2024-04-24 19:24:25.687354] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:00.146 [2024-04-24 19:24:25.687365] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:00.716 00:10:00.716 real 0m0.968s 00:10:00.716 user 0m0.738s 00:10:00.716 sys 0m0.124s 00:10:00.716 ************************************ 00:10:00.716 END TEST bdev_json_nonarray 00:10:00.716 ************************************ 00:10:00.716 19:24:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:00.716 19:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:00.716 19:24:26 -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:10:00.716 19:24:26 -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:10:00.716 19:24:26 -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:10:00.716 19:24:26 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:10:00.716 19:24:26 -- bdev/blockdev.sh@811 -- # cleanup 00:10:00.717 19:24:26 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:00.717 19:24:26 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:00.717 19:24:26 -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:10:00.717 19:24:26 -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:10:00.717 19:24:26 -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:10:00.717 19:24:26 -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:10:00.717 00:10:00.717 real 0m45.675s 00:10:00.717 user 1m7.181s 00:10:00.717 sys 0m6.490s 00:10:00.717 19:24:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:00.717 19:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:00.717 ************************************ 00:10:00.717 END TEST blockdev_nvme 00:10:00.717 ************************************ 00:10:00.717 19:24:26 -- spdk/autotest.sh@209 -- # uname -s 00:10:00.717 19:24:26 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:10:00.717 19:24:26 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:00.717 19:24:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:00.717 19:24:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:00.717 19:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:00.717 ************************************ 00:10:00.717 START TEST blockdev_nvme_gpt 00:10:00.717 ************************************ 00:10:00.717 19:24:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:00.977 * Looking for test storage... 
00:10:00.977 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:00.977 19:24:26 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:00.977 19:24:26 -- bdev/nbd_common.sh@6 -- # set -e 00:10:00.977 19:24:26 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:00.977 19:24:26 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:00.977 19:24:26 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:00.977 19:24:26 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:00.977 19:24:26 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:00.977 19:24:26 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:00.977 19:24:26 -- bdev/blockdev.sh@20 -- # : 00:10:00.977 19:24:26 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:10:00.977 19:24:26 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:10:00.977 19:24:26 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:10:00.977 19:24:26 -- bdev/blockdev.sh@674 -- # uname -s 00:10:00.977 19:24:26 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:10:00.977 19:24:26 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:10:00.977 19:24:26 -- bdev/blockdev.sh@682 -- # test_type=gpt 00:10:00.977 19:24:26 -- bdev/blockdev.sh@683 -- # crypto_device= 00:10:00.977 19:24:26 -- bdev/blockdev.sh@684 -- # dek= 00:10:00.977 19:24:26 -- bdev/blockdev.sh@685 -- # env_ctx= 00:10:00.977 19:24:26 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:10:00.977 19:24:26 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:10:00.977 19:24:26 -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:10:00.977 19:24:26 -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:10:00.977 19:24:26 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:10:00.977 19:24:26 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67606 00:10:00.977 19:24:26 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:00.977 19:24:26 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:00.977 19:24:26 -- bdev/blockdev.sh@49 -- # waitforlisten 67606 00:10:00.977 19:24:26 -- common/autotest_common.sh@817 -- # '[' -z 67606 ']' 00:10:00.977 19:24:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.977 19:24:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:00.977 19:24:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.977 19:24:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:00.977 19:24:26 -- common/autotest_common.sh@10 -- # set +x 00:10:00.977 [2024-04-24 19:24:26.633126] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:10:00.977 [2024-04-24 19:24:26.633328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67606 ] 00:10:01.236 [2024-04-24 19:24:26.797597] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.496 [2024-04-24 19:24:27.074598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.877 19:24:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:02.877 19:24:28 -- common/autotest_common.sh@850 -- # return 0 00:10:02.877 19:24:28 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:10:02.877 19:24:28 -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:10:02.877 19:24:28 -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:03.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:03.399 Waiting for block devices as requested 00:10:03.399 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.399 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.663 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.663 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:08.939 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:08.939 19:24:34 -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:08.939 19:24:34 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:10:08.939 19:24:34 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:10:08.939 19:24:34 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:10:08.939 19:24:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:08.939 19:24:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:10:08.939 19:24:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:10:08.939 19:24:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:08.939 19:24:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:08.939 19:24:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:08.939 19:24:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:10:08.939 19:24:34 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:10:08.939 19:24:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:08.939 19:24:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:08.939 19:24:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:08.939 19:24:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:10:08.939 19:24:34 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:10:08.939 19:24:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:08.939 19:24:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:08.939 19:24:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:08.939 19:24:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:10:08.939 19:24:34 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:10:08.939 19:24:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:08.939 19:24:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:08.939 19:24:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 
00:10:08.939 19:24:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:10:08.939 19:24:34 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:10:08.939 19:24:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:08.939 19:24:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:08.939 19:24:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:08.940 19:24:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:10:08.940 19:24:34 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:10:08.940 19:24:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:08.940 19:24:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:08.940 19:24:34 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:10:08.940 19:24:34 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:10:08.940 19:24:34 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:10:08.940 19:24:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:10:08.940 19:24:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:10:08.940 19:24:34 -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:10:08.940 19:24:34 -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:10:08.940 19:24:34 -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:10:08.940 19:24:34 -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:08.940 19:24:34 -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:10:08.940 19:24:34 -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:10:08.940 19:24:34 -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:10:08.940 19:24:34 -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:10:08.940 BYT; 00:10:08.940 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:08.940 19:24:34 -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:10:08.940 BYT; 00:10:08.940 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:08.940 19:24:34 -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:10:08.940 19:24:34 -- bdev/blockdev.sh@116 -- # break 00:10:08.940 19:24:34 -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:10:08.940 19:24:34 -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:08.940 19:24:34 -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:08.940 19:24:34 -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:08.940 19:24:34 -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:10:08.940 19:24:34 -- scripts/common.sh@408 -- # local spdk_guid 00:10:08.940 19:24:34 -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:08.940 19:24:34 -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:08.940 19:24:34 -- scripts/common.sh@413 -- # 
IFS='()' 00:10:08.940 19:24:34 -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:10:08.940 19:24:34 -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:08.940 19:24:34 -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:08.940 19:24:34 -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:08.940 19:24:34 -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:08.940 19:24:34 -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:08.940 19:24:34 -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:10:08.940 19:24:34 -- scripts/common.sh@420 -- # local spdk_guid 00:10:08.940 19:24:34 -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:08.940 19:24:34 -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:08.940 19:24:34 -- scripts/common.sh@425 -- # IFS='()' 00:10:08.940 19:24:34 -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:10:08.940 19:24:34 -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:08.940 19:24:34 -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:08.940 19:24:34 -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:08.940 19:24:34 -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:08.940 19:24:34 -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:08.940 19:24:34 -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:10:09.878 The operation has completed successfully. 00:10:09.878 19:24:35 -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:10:10.816 The operation has completed successfully. 
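A condensed sketch of the GPT labeling that just completed: the harness scrapes SPDK's partition type GUIDs out of gpt.h, normalizes them with two parameter expansions, and stamps them onto the two halves of /dev/nvme1n1. All device names and GUID values are the ones traced above; only GPT_H is a shorthand variable introduced here:

    # IFS='()' splits the GUID argument list out of the header line,
    # matching the scripts/common.sh trace above.
    GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    spdk_guid=${spdk_guid//, /-}   # 0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
    spdk_guid=${spdk_guid//0x/}    # 6527994e-2c5a-4eec-9613-8f5944074e8b
    # Two 50/50 partitions, then a type GUID (-t) and a unique GUID (-u) each.
    parted -s /dev/nvme1n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t "1:$spdk_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1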
00:10:10.816 19:24:36 -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:11.385 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:12.321 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.321 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.321 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.321 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.321 19:24:37 -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:10:12.321 19:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.321 19:24:37 -- common/autotest_common.sh@10 -- # set +x 00:10:12.321 [] 00:10:12.321 19:24:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.321 19:24:37 -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:10:12.321 19:24:37 -- bdev/blockdev.sh@81 -- # local json 00:10:12.321 19:24:37 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:12.321 19:24:37 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:12.321 19:24:37 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:12.321 19:24:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.321 19:24:37 -- common/autotest_common.sh@10 -- # set +x 00:10:12.580 19:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.580 19:24:38 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:10:12.580 19:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.580 19:24:38 -- common/autotest_common.sh@10 -- # set +x 00:10:12.580 19:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.580 19:24:38 -- bdev/blockdev.sh@740 -- # cat 00:10:12.580 19:24:38 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:10:12.580 19:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.580 19:24:38 -- common/autotest_common.sh@10 -- # set +x 00:10:12.839 19:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.839 19:24:38 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:10:12.839 19:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.839 19:24:38 -- common/autotest_common.sh@10 -- # set +x 00:10:12.839 19:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.839 19:24:38 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:12.839 19:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.839 19:24:38 -- common/autotest_common.sh@10 -- # set +x 00:10:12.839 19:24:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.839 19:24:38 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:10:12.839 19:24:38 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:10:12.839 19:24:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:12.839 19:24:38 -- common/autotest_common.sh@10 -- # set +x 00:10:12.839 19:24:38 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:10:12.839 19:24:38 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:12.839 19:24:38 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:10:12.839 19:24:38 -- bdev/blockdev.sh@749 -- # jq -r .name 00:10:12.840 19:24:38 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "955c635a-dffe-4c2b-9b00-6050a89b8cb5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "955c635a-dffe-4c2b-9b00-6050a89b8cb5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "098c4892-566b-4293-b6ff-3c99e478fc3c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "098c4892-566b-4293-b6ff-3c99e478fc3c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f44fc674-d1bd-41b7-9949-6790da4e6e7b"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f44fc674-d1bd-41b7-9949-6790da4e6e7b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "264c81fd-764f-4c25-8208-d447e5d8a606"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "264c81fd-764f-4c25-8208-d447e5d8a606",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3f9d98ac-9c6b-44b6-9155-eb6852532bc7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3f9d98ac-9c6b-44b6-9155-eb6852532bc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:12.840 19:24:38 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:10:12.840 19:24:38 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:10:12.840 19:24:38 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:10:12.840 19:24:38 -- bdev/blockdev.sh@754 -- # killprocess 67606 00:10:12.840 19:24:38 -- common/autotest_common.sh@936 -- # '[' -z 67606 ']' 00:10:12.840 19:24:38 -- common/autotest_common.sh@940 -- # kill -0 67606 00:10:12.840 19:24:38 -- common/autotest_common.sh@941 -- # uname 00:10:12.840 19:24:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:12.840 19:24:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67606 00:10:12.840 killing process with pid 67606 00:10:12.840 19:24:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:12.840 19:24:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:12.840 19:24:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67606' 00:10:12.840 19:24:38 -- common/autotest_common.sh@955 -- # kill 67606 00:10:12.840 19:24:38 -- common/autotest_common.sh@960 -- # wait 67606 00:10:16.127 19:24:41 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:16.127 19:24:41 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:16.127 19:24:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:10:16.127 19:24:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:16.127 19:24:41 -- common/autotest_common.sh@10 -- # set +x 00:10:16.127 ************************************ 00:10:16.127 START TEST bdev_hello_world 00:10:16.127 ************************************ 00:10:16.127 19:24:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:10:16.127 [2024-04-24 19:24:41.434522] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:10:16.127 [2024-04-24 19:24:41.434712] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68264 ] 00:10:16.127 [2024-04-24 19:24:41.598170] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.388 [2024-04-24 19:24:41.846944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.957 [2024-04-24 19:24:42.581595] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:16.957 [2024-04-24 19:24:42.581669] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:10:16.957 [2024-04-24 19:24:42.581696] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:16.957 [2024-04-24 19:24:42.584936] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:16.957 [2024-04-24 19:24:42.585565] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:16.957 [2024-04-24 19:24:42.585598] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:16.957 [2024-04-24 19:24:42.585788] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:10:16.957 00:10:16.957 [2024-04-24 19:24:42.585820] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:18.864 00:10:18.864 real 0m2.663s 00:10:18.864 user 0m2.309s 00:10:18.864 sys 0m0.245s 00:10:18.864 19:24:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:18.864 ************************************ 00:10:18.864 END TEST bdev_hello_world 00:10:18.864 ************************************ 00:10:18.864 19:24:44 -- common/autotest_common.sh@10 -- # set +x 00:10:18.864 19:24:44 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:10:18.864 19:24:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:18.864 19:24:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:18.864 19:24:44 -- common/autotest_common.sh@10 -- # set +x 00:10:18.864 ************************************ 00:10:18.864 START TEST bdev_bounds 00:10:18.864 ************************************ 00:10:18.864 19:24:44 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:10:18.864 Process bdevio pid: 68311 00:10:18.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.864 19:24:44 -- bdev/blockdev.sh@290 -- # bdevio_pid=68311 00:10:18.864 19:24:44 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:18.864 19:24:44 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:18.864 19:24:44 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68311' 00:10:18.864 19:24:44 -- bdev/blockdev.sh@293 -- # waitforlisten 68311 00:10:18.864 19:24:44 -- common/autotest_common.sh@817 -- # '[' -z 68311 ']' 00:10:18.864 19:24:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.864 19:24:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:18.864 19:24:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
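The bounds test uses a split-process pattern: bdevio starts in wait mode, holding the bdevs from bdev.json open, and a helper script then kicks off the actual CUnit battery over the RPC socket. A condensed sketch of that bracket with the paths from the trace above; waitforlisten and killprocess are the autotest_common.sh helpers seen throughout this log, and the -s 0 gloss is an assumption about the harness's memory-size hint:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk
    # -w: register the bdevs, then wait for RPC; -s 0: memory-size hint.
    "$SPDK_REPO/test/bdev/bdevio/bdevio" -w -s 0 \
        --json "$SPDK_REPO/test/bdev/bdev.json" &
    bdevio_pid=$!
    waitforlisten "$bdevio_pid"                           # polls /var/tmp/spdk.sock
    "$SPDK_REPO/test/bdev/bdevio/tests.py" perform_tests  # drives the suites below
    killprocess "$bdevio_pid"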
00:10:18.864 19:24:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:18.864 19:24:44 -- common/autotest_common.sh@10 -- # set +x 00:10:18.864 [2024-04-24 19:24:44.263681] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:10:18.864 [2024-04-24 19:24:44.263954] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68311 ] 00:10:18.864 [2024-04-24 19:24:44.439201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:19.122 [2024-04-24 19:24:44.712463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.122 [2024-04-24 19:24:44.712481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.122 [2024-04-24 19:24:44.712484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.059 19:24:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:20.059 19:24:45 -- common/autotest_common.sh@850 -- # return 0 00:10:20.059 19:24:45 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:20.059 I/O targets: 00:10:20.059 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:10:20.059 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:10:20.059 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:10:20.059 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:20.059 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:20.059 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:20.059 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:20.059 00:10:20.059 00:10:20.059 CUnit - A unit testing framework for C - Version 2.1-3 00:10:20.059 http://cunit.sourceforge.net/ 00:10:20.059 00:10:20.059 00:10:20.059 Suite: bdevio tests on: Nvme3n1 00:10:20.059 Test: blockdev write read block ...passed 00:10:20.059 Test: blockdev write zeroes read block ...passed 00:10:20.059 Test: blockdev write zeroes read no split ...passed 00:10:20.059 Test: blockdev write zeroes read split ...passed 00:10:20.059 Test: blockdev write zeroes read split partial ...passed 00:10:20.059 Test: blockdev reset ...[2024-04-24 19:24:45.714703] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:10:20.059 [2024-04-24 19:24:45.718661] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:20.059 passed 00:10:20.059 Test: blockdev write read 8 blocks ...passed 00:10:20.059 Test: blockdev write read size > 128k ...passed 00:10:20.059 Test: blockdev write read invalid size ...passed 00:10:20.059 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.059 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.059 Test: blockdev write read max offset ...passed 00:10:20.059 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.059 Test: blockdev writev readv 8 blocks ...passed 00:10:20.059 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.059 Test: blockdev writev readv block ...passed 00:10:20.059 Test: blockdev writev readv size > 128k ...passed 00:10:20.059 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.059 Test: blockdev comparev and writev ...[2024-04-24 19:24:45.727052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28420a000 len:0x1000 00:10:20.059 [2024-04-24 19:24:45.727193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:20.059 passed 00:10:20.059 Test: blockdev nvme passthru rw ...passed 00:10:20.059 Test: blockdev nvme passthru vendor specific ...[2024-04-24 19:24:45.728040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:20.059 [2024-04-24 19:24:45.728134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:20.059 passed 00:10:20.059 Test: blockdev nvme admin passthru ...passed 00:10:20.059 Test: blockdev copy ...passed 00:10:20.059 Suite: bdevio tests on: Nvme2n3 00:10:20.059 Test: blockdev write read block ...passed 00:10:20.318 Test: blockdev write zeroes read block ...passed 00:10:20.318 Test: blockdev write zeroes read no split ...passed 00:10:20.318 Test: blockdev write zeroes read split ...passed 00:10:20.318 Test: blockdev write zeroes read split partial ...passed 00:10:20.318 Test: blockdev reset ...[2024-04-24 19:24:45.820489] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:20.318 [2024-04-24 19:24:45.824618] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:20.318 passed 00:10:20.318 Test: blockdev write read 8 blocks ...passed 00:10:20.318 Test: blockdev write read size > 128k ...passed 00:10:20.318 Test: blockdev write read invalid size ...passed 00:10:20.318 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.318 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.318 Test: blockdev write read max offset ...passed 00:10:20.318 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.318 Test: blockdev writev readv 8 blocks ...passed 00:10:20.318 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.318 Test: blockdev writev readv block ...passed 00:10:20.318 Test: blockdev writev readv size > 128k ...passed 00:10:20.318 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.318 Test: blockdev comparev and writev ...[2024-04-24 19:24:45.833721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x263104000 len:0x1000 00:10:20.318 [2024-04-24 19:24:45.833785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:20.318 passed 00:10:20.318 Test: blockdev nvme passthru rw ...passed 00:10:20.318 Test: blockdev nvme passthru vendor specific ...passed 00:10:20.318 Test: blockdev nvme admin passthru ...[2024-04-24 19:24:45.834404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:20.318 [2024-04-24 19:24:45.834449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:20.318 passed 00:10:20.318 Test: blockdev copy ...passed 00:10:20.318 Suite: bdevio tests on: Nvme2n2 00:10:20.318 Test: blockdev write read block ...passed 00:10:20.318 Test: blockdev write zeroes read block ...passed 00:10:20.318 Test: blockdev write zeroes read no split ...passed 00:10:20.318 Test: blockdev write zeroes read split ...passed 00:10:20.318 Test: blockdev write zeroes read split partial ...passed 00:10:20.318 Test: blockdev reset ...[2024-04-24 19:24:45.930476] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:20.318 [2024-04-24 19:24:45.934809] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:20.318 passed 00:10:20.318 Test: blockdev write read 8 blocks ...passed 00:10:20.318 Test: blockdev write read size > 128k ...passed 00:10:20.318 Test: blockdev write read invalid size ...passed 00:10:20.318 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.318 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.318 Test: blockdev write read max offset ...passed 00:10:20.318 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.318 Test: blockdev writev readv 8 blocks ...passed 00:10:20.318 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.318 Test: blockdev writev readv block ...passed 00:10:20.318 Test: blockdev writev readv size > 128k ...passed 00:10:20.318 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.318 Test: blockdev comparev and writev ...[2024-04-24 19:24:45.944340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x263104000 len:0x1000 00:10:20.318 [2024-04-24 19:24:45.944504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:20.318 passed 00:10:20.318 Test: blockdev nvme passthru rw ...passed 00:10:20.318 Test: blockdev nvme passthru vendor specific ...[2024-04-24 19:24:45.945416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:20.318 [2024-04-24 19:24:45.945506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:20.318 passed 00:10:20.318 Test: blockdev nvme admin passthru ...passed 00:10:20.318 Test: blockdev copy ...passed 00:10:20.318 Suite: bdevio tests on: Nvme2n1 00:10:20.318 Test: blockdev write read block ...passed 00:10:20.318 Test: blockdev write zeroes read block ...passed 00:10:20.318 Test: blockdev write zeroes read no split ...passed 00:10:20.578 Test: blockdev write zeroes read split ...passed 00:10:20.578 Test: blockdev write zeroes read split partial ...passed 00:10:20.578 Test: blockdev reset ...[2024-04-24 19:24:46.036613] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:10:20.578 [2024-04-24 19:24:46.040894] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:20.578 passed 00:10:20.578 Test: blockdev write read 8 blocks ...passed 00:10:20.578 Test: blockdev write read size > 128k ...passed 00:10:20.578 Test: blockdev write read invalid size ...passed 00:10:20.578 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.578 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.578 Test: blockdev write read max offset ...passed 00:10:20.578 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.578 Test: blockdev writev readv 8 blocks ...passed 00:10:20.578 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.578 Test: blockdev writev readv block ...passed 00:10:20.578 Test: blockdev writev readv size > 128k ...passed 00:10:20.578 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.578 Test: blockdev comparev and writev ...[2024-04-24 19:24:46.049149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x292c3c000 len:0x1000 00:10:20.578 [2024-04-24 19:24:46.049266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:20.578 passed 00:10:20.578 Test: blockdev nvme passthru rw ...passed 00:10:20.578 Test: blockdev nvme passthru vendor specific ...[2024-04-24 19:24:46.050079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:20.578 [2024-04-24 19:24:46.050169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:20.578 passed 00:10:20.578 Test: blockdev nvme admin passthru ...passed 00:10:20.578 Test: blockdev copy ...passed 00:10:20.578 Suite: bdevio tests on: Nvme1n1 00:10:20.578 Test: blockdev write read block ...passed 00:10:20.578 Test: blockdev write zeroes read block ...passed 00:10:20.578 Test: blockdev write zeroes read no split ...passed 00:10:20.578 Test: blockdev write zeroes read split ...passed 00:10:20.578 Test: blockdev write zeroes read split partial ...passed 00:10:20.578 Test: blockdev reset ...[2024-04-24 19:24:46.140907] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:10:20.578 [2024-04-24 19:24:46.144951] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:20.578 passed 00:10:20.578 Test: blockdev write read 8 blocks ...passed 00:10:20.578 Test: blockdev write read size > 128k ...passed 00:10:20.578 Test: blockdev write read invalid size ...passed 00:10:20.578 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.578 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.578 Test: blockdev write read max offset ...passed 00:10:20.578 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.578 Test: blockdev writev readv 8 blocks ...passed 00:10:20.578 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.578 Test: blockdev writev readv block ...passed 00:10:20.578 Test: blockdev writev readv size > 128k ...passed 00:10:20.578 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.578 Test: blockdev comparev and writev ...[2024-04-24 19:24:46.153517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x292c38000 len:0x1000 00:10:20.578 [2024-04-24 19:24:46.153645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:20.578 passed 00:10:20.578 Test: blockdev nvme passthru rw ...passed 00:10:20.578 Test: blockdev nvme passthru vendor specific ...[2024-04-24 19:24:46.154494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:20.578 [2024-04-24 19:24:46.154584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:20.578 passed 00:10:20.578 Test: blockdev nvme admin passthru ...passed 00:10:20.578 Test: blockdev copy ...passed 00:10:20.578 Suite: bdevio tests on: Nvme0n1p2 00:10:20.578 Test: blockdev write read block ...passed 00:10:20.578 Test: blockdev write zeroes read block ...passed 00:10:20.578 Test: blockdev write zeroes read no split ...passed 00:10:20.578 Test: blockdev write zeroes read split ...passed 00:10:20.578 Test: blockdev write zeroes read split partial ...passed 00:10:20.578 Test: blockdev reset ...[2024-04-24 19:24:46.251076] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:20.578 [2024-04-24 19:24:46.254975] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:20.578 passed 00:10:20.837 Test: blockdev write read 8 blocks ...passed 00:10:20.837 Test: blockdev write read size > 128k ...passed 00:10:20.837 Test: blockdev write read invalid size ...passed 00:10:20.837 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.837 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.837 Test: blockdev write read max offset ...passed 00:10:20.837 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.837 Test: blockdev writev readv 8 blocks ...passed 00:10:20.838 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.838 Test: blockdev writev readv block ...passed 00:10:20.838 Test: blockdev writev readv size > 128k ...passed 00:10:20.838 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.838 Test: blockdev comparev and writev ...[2024-04-24 19:24:46.261852] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2passed since it has 00:10:20.838 separate metadata which is not supported yet. 
00:10:20.838 00:10:20.838 Test: blockdev nvme passthru rw ...passed 00:10:20.838 Test: blockdev nvme passthru vendor specific ...passed 00:10:20.838 Test: blockdev nvme admin passthru ...passed 00:10:20.838 Test: blockdev copy ...passed 00:10:20.838 Suite: bdevio tests on: Nvme0n1p1 00:10:20.838 Test: blockdev write read block ...passed 00:10:20.838 Test: blockdev write zeroes read block ...passed 00:10:20.838 Test: blockdev write zeroes read no split ...passed 00:10:20.838 Test: blockdev write zeroes read split ...passed 00:10:20.838 Test: blockdev write zeroes read split partial ...passed 00:10:20.838 Test: blockdev reset ...[2024-04-24 19:24:46.341738] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:20.838 [2024-04-24 19:24:46.345438] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:20.838 passed 00:10:20.838 Test: blockdev write read 8 blocks ...passed 00:10:20.838 Test: blockdev write read size > 128k ...passed 00:10:20.838 Test: blockdev write read invalid size ...passed 00:10:20.838 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:20.838 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:20.838 Test: blockdev write read max offset ...passed 00:10:20.838 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:20.838 Test: blockdev writev readv 8 blocks ...passed 00:10:20.838 Test: blockdev writev readv 30 x 1block ...passed 00:10:20.838 Test: blockdev writev readv block ...passed 00:10:20.838 Test: blockdev writev readv size > 128k ...passed 00:10:20.838 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:20.838 Test: blockdev comparev and writev ...[2024-04-24 19:24:46.352435] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1passed since it has 00:10:20.838 separate metadata which is not supported yet. 
00:10:20.838 00:10:20.838 Test: blockdev nvme passthru rw ...passed 00:10:20.838 Test: blockdev nvme passthru vendor specific ...passed 00:10:20.838 Test: blockdev nvme admin passthru ...passed 00:10:20.838 Test: blockdev copy ...passed 00:10:20.838 00:10:20.838 Run Summary: Type Total Ran Passed Failed Inactive 00:10:20.838 suites 7 7 n/a 0 0 00:10:20.838 tests 161 161 161 0 0 00:10:20.838 asserts 1006 1006 1006 0 n/a 00:10:20.838 00:10:20.838 Elapsed time = 2.012 seconds 00:10:20.838 0 00:10:20.838 19:24:46 -- bdev/blockdev.sh@295 -- # killprocess 68311 00:10:20.838 19:24:46 -- common/autotest_common.sh@936 -- # '[' -z 68311 ']' 00:10:20.838 19:24:46 -- common/autotest_common.sh@940 -- # kill -0 68311 00:10:20.838 19:24:46 -- common/autotest_common.sh@941 -- # uname 00:10:20.838 19:24:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:20.838 19:24:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68311 00:10:20.838 19:24:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:20.838 19:24:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:20.838 19:24:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68311' 00:10:20.838 killing process with pid 68311 00:10:20.838 19:24:46 -- common/autotest_common.sh@955 -- # kill 68311 00:10:20.838 19:24:46 -- common/autotest_common.sh@960 -- # wait 68311 00:10:22.215 19:24:47 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:10:22.215 00:10:22.215 real 0m3.493s 00:10:22.215 user 0m8.693s 00:10:22.215 sys 0m0.423s 00:10:22.215 19:24:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:22.215 19:24:47 -- common/autotest_common.sh@10 -- # set +x 00:10:22.215 ************************************ 00:10:22.215 END TEST bdev_bounds 00:10:22.215 ************************************ 00:10:22.215 19:24:47 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:22.215 19:24:47 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:10:22.215 19:24:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:22.215 19:24:47 -- common/autotest_common.sh@10 -- # set +x 00:10:22.215 ************************************ 00:10:22.215 START TEST bdev_nbd 00:10:22.215 ************************************ 00:10:22.215 19:24:47 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:22.215 19:24:47 -- bdev/blockdev.sh@300 -- # uname -s 00:10:22.215 19:24:47 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:10:22.215 19:24:47 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:22.215 19:24:47 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:22.215 19:24:47 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:22.215 19:24:47 -- bdev/blockdev.sh@304 -- # local bdev_all 00:10:22.215 19:24:47 -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:10:22.215 19:24:47 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:10:22.215 19:24:47 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:22.215 
19:24:47 -- bdev/blockdev.sh@311 -- # local nbd_all 00:10:22.215 19:24:47 -- bdev/blockdev.sh@312 -- # bdev_num=7 00:10:22.215 19:24:47 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:22.215 19:24:47 -- bdev/blockdev.sh@314 -- # local nbd_list 00:10:22.215 19:24:47 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:22.215 19:24:47 -- bdev/blockdev.sh@315 -- # local bdev_list 00:10:22.215 19:24:47 -- bdev/blockdev.sh@318 -- # nbd_pid=68386 00:10:22.215 19:24:47 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:22.216 19:24:47 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:22.216 19:24:47 -- bdev/blockdev.sh@320 -- # waitforlisten 68386 /var/tmp/spdk-nbd.sock 00:10:22.216 19:24:47 -- common/autotest_common.sh@817 -- # '[' -z 68386 ']' 00:10:22.216 19:24:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:22.216 19:24:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:22.216 19:24:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:22.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:22.216 19:24:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:22.216 19:24:47 -- common/autotest_common.sh@10 -- # set +x 00:10:22.474 [2024-04-24 19:24:47.912127] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:10:22.474 [2024-04-24 19:24:47.912444] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.474 [2024-04-24 19:24:48.083481] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.732 [2024-04-24 19:24:48.348833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.677 19:24:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:23.677 19:24:49 -- common/autotest_common.sh@850 -- # return 0 00:10:23.677 19:24:49 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:23.677 19:24:49 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:23.677 19:24:49 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:23.677 19:24:49 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:23.677 19:24:49 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:23.677 19:24:49 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:23.677 19:24:49 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:23.677 19:24:49 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:23.677 19:24:49 -- bdev/nbd_common.sh@24 -- # local i 00:10:23.677 19:24:49 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:23.677 19:24:49 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:23.677 19:24:49 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:23.677 19:24:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:10:23.935 19:24:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:23.935 19:24:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:23.935 19:24:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:23.935 19:24:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:10:23.935 19:24:49 -- common/autotest_common.sh@855 -- # local i 00:10:23.935 19:24:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:23.935 19:24:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:23.935 19:24:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:10:23.935 19:24:49 -- common/autotest_common.sh@859 -- # break 00:10:23.935 19:24:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:23.935 19:24:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:23.935 19:24:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:23.935 1+0 records in 00:10:23.935 1+0 records out 00:10:23.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742132 s, 5.5 MB/s 00:10:23.936 19:24:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.936 19:24:49 -- common/autotest_common.sh@872 -- # size=4096 00:10:23.936 19:24:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:23.936 19:24:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:23.936 19:24:49 -- common/autotest_common.sh@875 -- # return 0 00:10:23.936 19:24:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:23.936 19:24:49 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:23.936 19:24:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:10:24.194 19:24:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:24.194 19:24:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:24.194 19:24:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:24.194 19:24:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:10:24.194 19:24:49 -- common/autotest_common.sh@855 -- # local i 00:10:24.194 19:24:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:24.194 19:24:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:24.194 19:24:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:10:24.194 19:24:49 -- common/autotest_common.sh@859 -- # break 00:10:24.194 19:24:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:24.194 19:24:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:24.194 19:24:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.194 1+0 records in 00:10:24.194 1+0 records out 00:10:24.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659865 s, 6.2 MB/s 00:10:24.194 19:24:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.195 19:24:49 -- common/autotest_common.sh@872 -- # size=4096 00:10:24.195 19:24:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.195 19:24:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:24.195 19:24:49 -- common/autotest_common.sh@875 -- # return 0 
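Each nbd_start_disk above is immediately followed by a waitfornbd probe: the helper polls /proc/partitions until the kernel publishes the new device, then proves it can serve I/O by reading a single 4 KiB block with O_DIRECT, as the dd/stat/rm trace shows. A condensed sketch of that pattern, assuming a short sleep between probes (the in-tree common/autotest_common.sh helper may differ in details):

  waitfornbd() {
      local nbd_name=$1 i size
      # wait (up to 20 probes) for the kernel to list the device
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1    # assumed probe interval
      done
      # read one 4 KiB block through O_DIRECT; a non-empty result file
      # proves the NBD device is actually backed by the SPDK target
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }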
00:10:24.195 19:24:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:24.195 19:24:49 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:24.195 19:24:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:10:24.453 19:24:49 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:24.453 19:24:49 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:24.453 19:24:49 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:24.453 19:24:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:10:24.453 19:24:49 -- common/autotest_common.sh@855 -- # local i 00:10:24.453 19:24:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:24.453 19:24:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:24.453 19:24:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:10:24.453 19:24:49 -- common/autotest_common.sh@859 -- # break 00:10:24.453 19:24:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:24.453 19:24:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:24.453 19:24:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.453 1+0 records in 00:10:24.453 1+0 records out 00:10:24.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592567 s, 6.9 MB/s 00:10:24.453 19:24:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.453 19:24:49 -- common/autotest_common.sh@872 -- # size=4096 00:10:24.453 19:24:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.453 19:24:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:24.453 19:24:49 -- common/autotest_common.sh@875 -- # return 0 00:10:24.453 19:24:49 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:24.453 19:24:49 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:24.453 19:24:49 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:24.710 19:24:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:24.710 19:24:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:24.710 19:24:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:24.710 19:24:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:10:24.710 19:24:50 -- common/autotest_common.sh@855 -- # local i 00:10:24.710 19:24:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:24.710 19:24:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:24.710 19:24:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:10:24.710 19:24:50 -- common/autotest_common.sh@859 -- # break 00:10:24.710 19:24:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:24.710 19:24:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:24.710 19:24:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.710 1+0 records in 00:10:24.710 1+0 records out 00:10:24.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700691 s, 5.8 MB/s 00:10:24.710 19:24:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.710 19:24:50 -- common/autotest_common.sh@872 -- # size=4096 00:10:24.710 19:24:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.710 19:24:50 -- common/autotest_common.sh@874 -- # '[' 
4096 '!=' 0 ']' 00:10:24.710 19:24:50 -- common/autotest_common.sh@875 -- # return 0 00:10:24.710 19:24:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:24.710 19:24:50 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:24.710 19:24:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:24.968 19:24:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:24.968 19:24:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:24.968 19:24:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:24.968 19:24:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:10:24.968 19:24:50 -- common/autotest_common.sh@855 -- # local i 00:10:24.968 19:24:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:24.968 19:24:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:24.968 19:24:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:10:24.968 19:24:50 -- common/autotest_common.sh@859 -- # break 00:10:24.968 19:24:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:24.968 19:24:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:24.968 19:24:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:24.968 1+0 records in 00:10:24.968 1+0 records out 00:10:24.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000770079 s, 5.3 MB/s 00:10:24.968 19:24:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.968 19:24:50 -- common/autotest_common.sh@872 -- # size=4096 00:10:24.968 19:24:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:24.968 19:24:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:24.968 19:24:50 -- common/autotest_common.sh@875 -- # return 0 00:10:24.968 19:24:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:24.968 19:24:50 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:24.968 19:24:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:10:25.226 19:24:50 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:25.226 19:24:50 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:25.226 19:24:50 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:25.226 19:24:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:10:25.226 19:24:50 -- common/autotest_common.sh@855 -- # local i 00:10:25.226 19:24:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:25.226 19:24:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:25.226 19:24:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:10:25.226 19:24:50 -- common/autotest_common.sh@859 -- # break 00:10:25.226 19:24:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:25.226 19:24:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:25.226 19:24:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:25.227 1+0 records in 00:10:25.227 1+0 records out 00:10:25.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000736735 s, 5.6 MB/s 00:10:25.227 19:24:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.227 19:24:50 -- common/autotest_common.sh@872 -- # size=4096 00:10:25.227 19:24:50 -- common/autotest_common.sh@873 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.227 19:24:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:25.227 19:24:50 -- common/autotest_common.sh@875 -- # return 0 00:10:25.227 19:24:50 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:25.227 19:24:50 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:25.227 19:24:50 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:25.484 19:24:51 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:25.484 19:24:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:25.484 19:24:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:25.484 19:24:51 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:10:25.484 19:24:51 -- common/autotest_common.sh@855 -- # local i 00:10:25.484 19:24:51 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:25.484 19:24:51 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:25.484 19:24:51 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:10:25.484 19:24:51 -- common/autotest_common.sh@859 -- # break 00:10:25.484 19:24:51 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:25.484 19:24:51 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:25.484 19:24:51 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:25.484 1+0 records in 00:10:25.484 1+0 records out 00:10:25.484 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000900153 s, 4.6 MB/s 00:10:25.484 19:24:51 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.484 19:24:51 -- common/autotest_common.sh@872 -- # size=4096 00:10:25.484 19:24:51 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:25.484 19:24:51 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:25.484 19:24:51 -- common/autotest_common.sh@875 -- # return 0 00:10:25.484 19:24:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:25.484 19:24:51 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:25.484 19:24:51 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:25.742 19:24:51 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd0", 00:10:25.742 "bdev_name": "Nvme0n1p1" 00:10:25.742 }, 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd1", 00:10:25.742 "bdev_name": "Nvme0n1p2" 00:10:25.742 }, 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd2", 00:10:25.742 "bdev_name": "Nvme1n1" 00:10:25.742 }, 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd3", 00:10:25.742 "bdev_name": "Nvme2n1" 00:10:25.742 }, 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd4", 00:10:25.742 "bdev_name": "Nvme2n2" 00:10:25.742 }, 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd5", 00:10:25.742 "bdev_name": "Nvme2n3" 00:10:25.742 }, 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd6", 00:10:25.742 "bdev_name": "Nvme3n1" 00:10:25.742 } 00:10:25.742 ]' 00:10:25.742 19:24:51 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:25.742 19:24:51 -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd0", 00:10:25.742 "bdev_name": "Nvme0n1p1" 00:10:25.742 }, 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd1", 00:10:25.742 "bdev_name": "Nvme0n1p2" 00:10:25.742 }, 00:10:25.742 { 00:10:25.742 
"nbd_device": "/dev/nbd2", 00:10:25.742 "bdev_name": "Nvme1n1" 00:10:25.742 }, 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd3", 00:10:25.742 "bdev_name": "Nvme2n1" 00:10:25.742 }, 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd4", 00:10:25.742 "bdev_name": "Nvme2n2" 00:10:25.742 }, 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd5", 00:10:25.742 "bdev_name": "Nvme2n3" 00:10:25.742 }, 00:10:25.742 { 00:10:25.742 "nbd_device": "/dev/nbd6", 00:10:25.742 "bdev_name": "Nvme3n1" 00:10:25.742 } 00:10:25.742 ]' 00:10:25.742 19:24:51 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:25.742 19:24:51 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:25.742 19:24:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:25.742 19:24:51 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:25.742 19:24:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:25.742 19:24:51 -- bdev/nbd_common.sh@51 -- # local i 00:10:25.742 19:24:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:25.742 19:24:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:26.000 19:24:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:26.000 19:24:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:26.000 19:24:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:26.000 19:24:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.000 19:24:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.000 19:24:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:26.000 19:24:51 -- bdev/nbd_common.sh@41 -- # break 00:10:26.000 19:24:51 -- bdev/nbd_common.sh@45 -- # return 0 00:10:26.000 19:24:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.000 19:24:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:26.258 19:24:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:26.258 19:24:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:26.258 19:24:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:26.258 19:24:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.258 19:24:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.258 19:24:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:26.258 19:24:51 -- bdev/nbd_common.sh@41 -- # break 00:10:26.258 19:24:51 -- bdev/nbd_common.sh@45 -- # return 0 00:10:26.258 19:24:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.258 19:24:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:26.517 19:24:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:26.517 19:24:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:26.517 19:24:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:26.517 19:24:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.517 19:24:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.517 19:24:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:26.517 19:24:52 -- bdev/nbd_common.sh@41 -- # break 00:10:26.517 19:24:52 -- bdev/nbd_common.sh@45 -- # return 0 00:10:26.517 19:24:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.518 19:24:52 -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:26.775 19:24:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:26.775 19:24:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:26.775 19:24:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:26.775 19:24:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:26.775 19:24:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:26.775 19:24:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:26.775 19:24:52 -- bdev/nbd_common.sh@41 -- # break 00:10:26.775 19:24:52 -- bdev/nbd_common.sh@45 -- # return 0 00:10:26.775 19:24:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:26.775 19:24:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@41 -- # break 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@45 -- # return 0 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@41 -- # break 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@45 -- # return 0 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:27.342 19:24:52 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:27.602 19:24:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:27.602 19:24:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:27.602 19:24:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:10:27.602 19:24:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:27.602 19:24:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:27.602 19:24:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:27.602 19:24:53 -- bdev/nbd_common.sh@41 -- # break 00:10:27.602 19:24:53 -- bdev/nbd_common.sh@45 -- # return 0 00:10:27.602 19:24:53 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:27.602 19:24:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.602 19:24:53 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:27.861 19:24:53 -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@65 -- # true 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@65 -- # count=0 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@122 -- # count=0 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@127 -- # return 0 00:10:27.861 19:24:53 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@12 -- # local i 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:27.861 19:24:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:10:28.120 /dev/nbd0 00:10:28.120 19:24:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:28.120 19:24:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:28.120 19:24:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:10:28.120 19:24:53 -- common/autotest_common.sh@855 -- # local i 00:10:28.120 19:24:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:28.121 19:24:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:28.121 19:24:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:10:28.121 19:24:53 -- common/autotest_common.sh@859 -- # break 00:10:28.121 19:24:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:28.121 19:24:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:28.121 19:24:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.121 1+0 records in 00:10:28.121 1+0 records out 00:10:28.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000844629 s, 4.8 MB/s 00:10:28.121 19:24:53 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
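After the start/stop pass tears everything down (the empty nbd_get_disks result above), the data-verify half re-attaches the seven bdevs, this time onto /dev/nbd0, /dev/nbd1 and /dev/nbd10 through /dev/nbd14. Run by hand against the same RPC socket, the attach-and-inspect step looks like this (socket path, bdev names and device nodes are the ones this job uses):

  # attach bdevs to kernel NBD nodes over the dedicated RPC socket
  sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
  sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
  sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1   /dev/nbd10
  # list the active {nbd_device, bdev_name} pairs, as nbd_get_disks does above
  sudo scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks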
00:10:28.121 19:24:53 -- common/autotest_common.sh@872 -- # size=4096 00:10:28.121 19:24:53 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.121 19:24:53 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:28.121 19:24:53 -- common/autotest_common.sh@875 -- # return 0 00:10:28.121 19:24:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.121 19:24:53 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:28.121 19:24:53 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:10:28.379 /dev/nbd1 00:10:28.379 19:24:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:28.379 19:24:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:28.379 19:24:53 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:10:28.379 19:24:53 -- common/autotest_common.sh@855 -- # local i 00:10:28.379 19:24:53 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:28.379 19:24:53 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:28.379 19:24:53 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:10:28.379 19:24:53 -- common/autotest_common.sh@859 -- # break 00:10:28.379 19:24:53 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:28.379 19:24:53 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:28.379 19:24:53 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.379 1+0 records in 00:10:28.379 1+0 records out 00:10:28.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701548 s, 5.8 MB/s 00:10:28.379 19:24:54 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.379 19:24:54 -- common/autotest_common.sh@872 -- # size=4096 00:10:28.379 19:24:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.379 19:24:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:28.379 19:24:54 -- common/autotest_common.sh@875 -- # return 0 00:10:28.379 19:24:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.379 19:24:54 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:28.379 19:24:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:10:28.638 /dev/nbd10 00:10:28.897 19:24:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:28.897 19:24:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:28.897 19:24:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:10:28.897 19:24:54 -- common/autotest_common.sh@855 -- # local i 00:10:28.897 19:24:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:28.897 19:24:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:28.897 19:24:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:10:28.897 19:24:54 -- common/autotest_common.sh@859 -- # break 00:10:28.897 19:24:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:28.897 19:24:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:28.897 19:24:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:28.897 1+0 records in 00:10:28.897 1+0 records out 00:10:28.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000777172 s, 5.3 MB/s 00:10:28.897 19:24:54 -- common/autotest_common.sh@872 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.897 19:24:54 -- common/autotest_common.sh@872 -- # size=4096 00:10:28.897 19:24:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:28.897 19:24:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:28.897 19:24:54 -- common/autotest_common.sh@875 -- # return 0 00:10:28.897 19:24:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:28.897 19:24:54 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:28.897 19:24:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:28.897 /dev/nbd11 00:10:29.156 19:24:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:29.156 19:24:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:29.156 19:24:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:10:29.156 19:24:54 -- common/autotest_common.sh@855 -- # local i 00:10:29.156 19:24:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:29.156 19:24:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:29.156 19:24:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:10:29.156 19:24:54 -- common/autotest_common.sh@859 -- # break 00:10:29.156 19:24:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:29.156 19:24:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:29.156 19:24:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:29.156 1+0 records in 00:10:29.156 1+0 records out 00:10:29.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000724703 s, 5.7 MB/s 00:10:29.156 19:24:54 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.156 19:24:54 -- common/autotest_common.sh@872 -- # size=4096 00:10:29.156 19:24:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.156 19:24:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:29.156 19:24:54 -- common/autotest_common.sh@875 -- # return 0 00:10:29.156 19:24:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:29.156 19:24:54 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:29.156 19:24:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:29.416 /dev/nbd12 00:10:29.416 19:24:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:29.416 19:24:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:29.416 19:24:54 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:10:29.416 19:24:54 -- common/autotest_common.sh@855 -- # local i 00:10:29.416 19:24:54 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:29.416 19:24:54 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:29.416 19:24:54 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:10:29.416 19:24:54 -- common/autotest_common.sh@859 -- # break 00:10:29.416 19:24:54 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:29.416 19:24:54 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:29.416 19:24:54 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:29.416 1+0 records in 00:10:29.416 1+0 records out 00:10:29.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000705237 s, 5.8 MB/s 00:10:29.416 19:24:54 -- common/autotest_common.sh@872 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.416 19:24:54 -- common/autotest_common.sh@872 -- # size=4096 00:10:29.416 19:24:54 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.416 19:24:54 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:29.416 19:24:54 -- common/autotest_common.sh@875 -- # return 0 00:10:29.416 19:24:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:29.416 19:24:54 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:29.416 19:24:54 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:29.675 /dev/nbd13 00:10:29.675 19:24:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:29.675 19:24:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:29.675 19:24:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:10:29.675 19:24:55 -- common/autotest_common.sh@855 -- # local i 00:10:29.675 19:24:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:29.675 19:24:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:29.675 19:24:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:10:29.675 19:24:55 -- common/autotest_common.sh@859 -- # break 00:10:29.675 19:24:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:29.675 19:24:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:29.675 19:24:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:29.675 1+0 records in 00:10:29.675 1+0 records out 00:10:29.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734668 s, 5.6 MB/s 00:10:29.675 19:24:55 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.675 19:24:55 -- common/autotest_common.sh@872 -- # size=4096 00:10:29.675 19:24:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.675 19:24:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:29.675 19:24:55 -- common/autotest_common.sh@875 -- # return 0 00:10:29.675 19:24:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:29.675 19:24:55 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:29.675 19:24:55 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:29.934 /dev/nbd14 00:10:29.934 19:24:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:29.934 19:24:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:29.934 19:24:55 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:10:29.934 19:24:55 -- common/autotest_common.sh@855 -- # local i 00:10:29.934 19:24:55 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:10:29.934 19:24:55 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:10:29.934 19:24:55 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:10:29.934 19:24:55 -- common/autotest_common.sh@859 -- # break 00:10:29.934 19:24:55 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:10:29.934 19:24:55 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:10:29.934 19:24:55 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:29.934 1+0 records in 00:10:29.934 1+0 records out 00:10:29.934 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000907622 s, 4.5 MB/s 00:10:29.934 19:24:55 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.934 19:24:55 -- common/autotest_common.sh@872 -- # size=4096 00:10:29.934 19:24:55 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:29.934 19:24:55 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:10:29.934 19:24:55 -- common/autotest_common.sh@875 -- # return 0 00:10:29.934 19:24:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:29.934 19:24:55 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:29.934 19:24:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:29.934 19:24:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:29.934 19:24:55 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:30.191 19:24:55 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd0", 00:10:30.191 "bdev_name": "Nvme0n1p1" 00:10:30.191 }, 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd1", 00:10:30.191 "bdev_name": "Nvme0n1p2" 00:10:30.191 }, 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd10", 00:10:30.191 "bdev_name": "Nvme1n1" 00:10:30.191 }, 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd11", 00:10:30.191 "bdev_name": "Nvme2n1" 00:10:30.191 }, 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd12", 00:10:30.191 "bdev_name": "Nvme2n2" 00:10:30.191 }, 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd13", 00:10:30.191 "bdev_name": "Nvme2n3" 00:10:30.191 }, 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd14", 00:10:30.191 "bdev_name": "Nvme3n1" 00:10:30.191 } 00:10:30.191 ]' 00:10:30.191 19:24:55 -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd0", 00:10:30.191 "bdev_name": "Nvme0n1p1" 00:10:30.191 }, 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd1", 00:10:30.191 "bdev_name": "Nvme0n1p2" 00:10:30.191 }, 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd10", 00:10:30.191 "bdev_name": "Nvme1n1" 00:10:30.191 }, 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd11", 00:10:30.191 "bdev_name": "Nvme2n1" 00:10:30.191 }, 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd12", 00:10:30.191 "bdev_name": "Nvme2n2" 00:10:30.191 }, 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd13", 00:10:30.191 "bdev_name": "Nvme2n3" 00:10:30.191 }, 00:10:30.191 { 00:10:30.191 "nbd_device": "/dev/nbd14", 00:10:30.191 "bdev_name": "Nvme3n1" 00:10:30.191 } 00:10:30.191 ]' 00:10:30.191 19:24:55 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:30.191 19:24:55 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:30.191 /dev/nbd1 00:10:30.192 /dev/nbd10 00:10:30.192 /dev/nbd11 00:10:30.192 /dev/nbd12 00:10:30.192 /dev/nbd13 00:10:30.192 /dev/nbd14' 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:30.192 /dev/nbd1 00:10:30.192 /dev/nbd10 00:10:30.192 /dev/nbd11 00:10:30.192 /dev/nbd12 00:10:30.192 /dev/nbd13 00:10:30.192 /dev/nbd14' 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@65 -- # count=7 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@66 -- # echo 7 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@95 -- # count=7 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:30.192 19:24:55 -- 
bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:30.192 256+0 records in 00:10:30.192 256+0 records out 00:10:30.192 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00680467 s, 154 MB/s 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:30.192 19:24:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:30.450 256+0 records in 00:10:30.450 256+0 records out 00:10:30.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106301 s, 9.9 MB/s 00:10:30.450 19:24:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:30.450 19:24:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:30.450 256+0 records in 00:10:30.450 256+0 records out 00:10:30.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0993809 s, 10.6 MB/s 00:10:30.450 19:24:55 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:30.450 19:24:55 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:30.450 256+0 records in 00:10:30.450 256+0 records out 00:10:30.450 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0921221 s, 11.4 MB/s 00:10:30.450 19:24:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:30.450 19:24:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:30.710 256+0 records in 00:10:30.710 256+0 records out 00:10:30.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.108503 s, 9.7 MB/s 00:10:30.710 19:24:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:30.710 19:24:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:30.710 256+0 records in 00:10:30.710 256+0 records out 00:10:30.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.092448 s, 11.3 MB/s 00:10:30.710 19:24:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:30.710 19:24:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:30.710 256+0 records in 00:10:30.710 256+0 records out 00:10:30.710 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0925603 s, 11.3 MB/s 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:30.970 256+0 records in 00:10:30.970 256+0 records out 00:10:30.970 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0927742 s, 11.3 MB/s 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@51 -- # local i 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:30.970 19:24:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:31.229 19:24:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:31.229 19:24:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:31.229 19:24:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:31.229 19:24:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.229 19:24:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.229 19:24:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:31.229 19:24:56 -- bdev/nbd_common.sh@41 -- # break 00:10:31.229 19:24:56 -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.229 19:24:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.229 19:24:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd1 00:10:31.489 19:24:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:31.489 19:24:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:31.489 19:24:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:31.489 19:24:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.489 19:24:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.489 19:24:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:31.489 19:24:57 -- bdev/nbd_common.sh@41 -- # break 00:10:31.489 19:24:57 -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.489 19:24:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.489 19:24:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:31.750 19:24:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:31.750 19:24:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:31.750 19:24:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:31.750 19:24:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.750 19:24:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.750 19:24:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:31.750 19:24:57 -- bdev/nbd_common.sh@41 -- # break 00:10:31.750 19:24:57 -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.750 19:24:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.750 19:24:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@41 -- # break 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@41 -- # break 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.016 19:24:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:32.586 19:24:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:32.586 19:24:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:32.586 19:24:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:32.586 19:24:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.586 19:24:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.586 19:24:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 
/proc/partitions 00:10:32.586 19:24:57 -- bdev/nbd_common.sh@41 -- # break 00:10:32.586 19:24:57 -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.586 19:24:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:32.586 19:24:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:32.586 19:24:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:32.586 19:24:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:32.586 19:24:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:32.586 19:24:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:32.586 19:24:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:32.586 19:24:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:32.844 19:24:58 -- bdev/nbd_common.sh@41 -- # break 00:10:32.844 19:24:58 -- bdev/nbd_common.sh@45 -- # return 0 00:10:32.844 19:24:58 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:32.844 19:24:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.844 19:24:58 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:32.844 19:24:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:32.844 19:24:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:32.844 19:24:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@65 -- # true 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@65 -- # count=0 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@104 -- # count=0 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@109 -- # return 0 00:10:33.103 19:24:58 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:10:33.103 19:24:58 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:33.103 malloc_lvol_verify 00:10:33.362 19:24:58 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:33.362 202d2d7f-d5cd-48d2-a99f-fdb3bfdf9c5e 00:10:33.362 19:24:58 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:33.621 10422660-732a-482e-8347-aa4f25792f2d 00:10:33.621 19:24:59 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:33.879 /dev/nbd0 00:10:33.879 19:24:59 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:33.879 mke2fs 1.46.5 (30-Dec-2021) 00:10:33.879 Discarding device blocks: 0/4096 done 00:10:33.879 
Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:33.879 00:10:33.879 Allocating group tables: 0/1 done 00:10:33.879 Writing inode tables: 0/1 done 00:10:33.879 Creating journal (1024 blocks): done 00:10:33.879 Writing superblocks and filesystem accounting information: 0/1 done 00:10:33.879 00:10:33.879 19:24:59 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:33.879 19:24:59 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:33.879 19:24:59 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.879 19:24:59 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:33.879 19:24:59 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:33.879 19:24:59 -- bdev/nbd_common.sh@51 -- # local i 00:10:33.879 19:24:59 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:33.879 19:24:59 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:34.138 19:24:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:34.138 19:24:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:34.138 19:24:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:34.138 19:24:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:34.138 19:24:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:34.138 19:24:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:34.138 19:24:59 -- bdev/nbd_common.sh@41 -- # break 00:10:34.138 19:24:59 -- bdev/nbd_common.sh@45 -- # return 0 00:10:34.138 19:24:59 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:34.138 19:24:59 -- bdev/nbd_common.sh@147 -- # return 0 00:10:34.138 19:24:59 -- bdev/blockdev.sh@326 -- # killprocess 68386 00:10:34.138 19:24:59 -- common/autotest_common.sh@936 -- # '[' -z 68386 ']' 00:10:34.138 19:24:59 -- common/autotest_common.sh@940 -- # kill -0 68386 00:10:34.138 19:24:59 -- common/autotest_common.sh@941 -- # uname 00:10:34.138 19:24:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:34.138 19:24:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68386 00:10:34.138 killing process with pid 68386 00:10:34.138 19:24:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:34.138 19:24:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:34.138 19:24:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68386' 00:10:34.138 19:24:59 -- common/autotest_common.sh@955 -- # kill 68386 00:10:34.138 19:24:59 -- common/autotest_common.sh@960 -- # wait 68386 00:10:36.060 ************************************ 00:10:36.060 END TEST bdev_nbd 00:10:36.060 ************************************ 00:10:36.060 19:25:01 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:10:36.060 00:10:36.060 real 0m13.453s 00:10:36.060 user 0m18.505s 00:10:36.060 sys 0m4.593s 00:10:36.060 19:25:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:36.060 19:25:01 -- common/autotest_common.sh@10 -- # set +x 00:10:36.060 19:25:01 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:10:36.060 19:25:01 -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:10:36.060 skipping fio tests on NVMe due to multi-ns failures. 00:10:36.060 19:25:01 -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:10:36.060 19:25:01 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
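Annotation: the bdev_nbd test that finishes above repeats one pattern per namespace: export the bdev over NBD with the nbd_start_disk RPC, poll until the kernel device shows up and answers a 4 KiB O_DIRECT read, push random data through the node, compare it back, and stop the disk. A minimal standalone sketch of that round-trip follows; the socket, bdev name, and device node mirror values from the trace, while the wait-and-probe logic is a simplified reconstruction of waitfornbd in common/autotest_common.sh, not a verbatim copy.

#!/usr/bin/env bash
# Sketch: NBD round-trip against a running spdk_tgt (assumed to be
# listening on /var/tmp/spdk-nbd.sock, as in the trace above).
set -euo pipefail
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
bdev=Nvme2n1
dev=/dev/nbd11

"$rpc" -s "$sock" nbd_start_disk "$bdev" "$dev"

# Wait for the kernel to register the device, then prove it services
# I/O with a single 4 KiB direct read (the test's waitfornbd check).
for ((i = 1; i <= 20; i++)); do
    grep -q -w "${dev#/dev/}" /proc/partitions && break
    sleep 0.1
done
dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
[ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]

# Data round-trip: 1 MiB of random bytes in, byte-for-byte compare out.
dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
dd if=/tmp/nbdrandtest of="$dev" bs=4096 count=256 oflag=direct
cmp -b -n 1M /tmp/nbdrandtest "$dev"

"$rpc" -s "$sock" nbd_stop_disk "$dev"
rm -f /tmp/nbdtest /tmp/nbdrandtest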
00:10:36.060 19:25:01 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:36.060 19:25:01 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:36.060 19:25:01 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:10:36.060 19:25:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:36.060 19:25:01 -- common/autotest_common.sh@10 -- # set +x 00:10:36.060 ************************************ 00:10:36.060 START TEST bdev_verify 00:10:36.060 ************************************ 00:10:36.060 19:25:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:36.060 [2024-04-24 19:25:01.495973] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:10:36.060 [2024-04-24 19:25:01.496101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68830 ] 00:10:36.060 [2024-04-24 19:25:01.665799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:36.319 [2024-04-24 19:25:01.927132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.319 [2024-04-24 19:25:01.927151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.259 Running I/O for 5 seconds... 00:10:42.544 00:10:42.544 Latency(us) 00:10:42.544 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:42.544 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:42.544 Verification LBA range: start 0x0 length 0x5e800 00:10:42.544 Nvme0n1p1 : 5.07 1286.65 5.03 0.00 0.00 99222.58 17171.00 95699.73 00:10:42.544 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:42.544 Verification LBA range: start 0x5e800 length 0x5e800 00:10:42.544 Nvme0n1p1 : 5.07 1312.23 5.13 0.00 0.00 97337.92 19231.52 88831.33 00:10:42.544 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:42.544 Verification LBA range: start 0x0 length 0x5e7ff 00:10:42.544 Nvme0n1p2 : 5.08 1285.87 5.02 0.00 0.00 99057.88 18430.21 86083.97 00:10:42.544 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:42.544 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:10:42.544 Nvme0n1p2 : 5.07 1311.80 5.12 0.00 0.00 97216.11 19346.00 83794.50 00:10:42.544 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:42.544 Verification LBA range: start 0x0 length 0xa0000 00:10:42.544 Nvme1n1 : 5.08 1285.01 5.02 0.00 0.00 98909.54 19918.37 82878.71 00:10:42.544 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:42.544 Verification LBA range: start 0xa0000 length 0xa0000 00:10:42.545 Nvme1n1 : 5.08 1311.05 5.12 0.00 0.00 97068.76 20147.31 76010.31 00:10:42.545 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:42.545 Verification LBA range: start 0x0 length 0x80000 00:10:42.545 Nvme2n1 : 5.08 1284.11 5.02 0.00 0.00 98765.17 21063.10 82878.71 00:10:42.545 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:42.545 Verification LBA range: start 0x80000 length 0x80000 00:10:42.545 Nvme2n1 
: 5.08 1310.17 5.12 0.00 0.00 96951.90 20719.68 72805.06 00:10:42.545 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:42.545 Verification LBA range: start 0x0 length 0x80000 00:10:42.545 Nvme2n2 : 5.09 1283.09 5.01 0.00 0.00 98641.70 22322.31 85626.08 00:10:42.545 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:42.545 Verification LBA range: start 0x80000 length 0x80000 00:10:42.545 Nvme2n2 : 5.08 1309.28 5.11 0.00 0.00 96826.60 22322.31 72805.06 00:10:42.545 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:42.545 Verification LBA range: start 0x0 length 0x80000 00:10:42.545 Nvme2n3 : 5.09 1282.15 5.01 0.00 0.00 98475.93 21177.57 86541.86 00:10:42.545 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:42.545 Verification LBA range: start 0x80000 length 0x80000 00:10:42.545 Nvme2n3 : 5.09 1308.26 5.11 0.00 0.00 96705.72 22207.83 74178.74 00:10:42.545 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:42.545 Verification LBA range: start 0x0 length 0x20000 00:10:42.545 Nvme3n1 : 5.09 1281.30 5.01 0.00 0.00 98372.10 19574.94 87915.54 00:10:42.545 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:42.545 Verification LBA range: start 0x20000 length 0x20000 00:10:42.545 Nvme3n1 : 5.09 1307.31 5.11 0.00 0.00 96605.52 20948.63 75552.42 00:10:42.545 =================================================================================================================== 00:10:42.545 Total : 18158.29 70.93 0.00 0.00 97859.56 17171.00 95699.73 00:10:44.502 00:10:44.502 real 0m8.423s 00:10:44.502 user 0m15.287s 00:10:44.502 sys 0m0.312s 00:10:44.502 19:25:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:44.502 19:25:09 -- common/autotest_common.sh@10 -- # set +x 00:10:44.502 ************************************ 00:10:44.502 END TEST bdev_verify 00:10:44.502 ************************************ 00:10:44.502 19:25:09 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:44.502 19:25:09 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:10:44.502 19:25:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:44.502 19:25:09 -- common/autotest_common.sh@10 -- # set +x 00:10:44.502 ************************************ 00:10:44.502 START TEST bdev_verify_big_io 00:10:44.502 ************************************ 00:10:44.502 19:25:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:44.502 [2024-04-24 19:25:10.045936] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
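Annotation: both verification passes here are the bdevperf example binary replaying the bdev configuration captured in bdev.json: bdev_verify just completed with 4 KiB I/O, and bdev_verify_big_io, starting next, reuses the identical harness at 64 KiB. Reconstructed from the run_test lines, the two invocations are:

# The two bdevperf invocations driving these tests. Flag meanings:
#   -q 128     queue depth             -o N     I/O size in bytes
#   -w verify  write-then-read-back verification workload
#   -t 5       seconds per run         -m 0x3   cores 0 and 1
#   -C         per bdevperf's usage text, lets every core submit I/O to
#              every bdev (stated here as an assumption, not verified)
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
"$bdevperf" --json "$conf" -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify
"$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io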
00:10:44.502 [2024-04-24 19:25:10.046063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68944 ] 00:10:44.763 [2024-04-24 19:25:10.215756] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:45.023 [2024-04-24 19:25:10.486627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.023 [2024-04-24 19:25:10.486702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.965 Running I/O for 5 seconds... 00:10:52.591 00:10:52.591 Latency(us) 00:10:52.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.591 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x0 length 0x5e80 00:10:52.591 Nvme0n1p1 : 5.90 110.08 6.88 0.00 0.00 1111443.04 18201.26 1501890.52 00:10:52.591 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x5e80 length 0x5e80 00:10:52.591 Nvme0n1p1 : 5.81 115.42 7.21 0.00 0.00 1068903.30 51741.96 1076965.39 00:10:52.591 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x0 length 0x5e7f 00:10:52.591 Nvme0n1p2 : 5.90 110.86 6.93 0.00 0.00 1063121.03 19803.89 1509216.81 00:10:52.591 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x5e7f length 0x5e7f 00:10:52.591 Nvme0n1p2 : 5.81 113.34 7.08 0.00 0.00 1066374.09 81962.93 1062312.80 00:10:52.591 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x0 length 0xa000 00:10:52.591 Nvme1n1 : 5.95 116.21 7.26 0.00 0.00 993238.81 36173.58 1538521.99 00:10:52.591 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0xa000 length 0xa000 00:10:52.591 Nvme1n1 : 5.82 91.98 5.75 0.00 0.00 1291193.63 153852.20 2139278.20 00:10:52.591 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x0 length 0x8000 00:10:52.591 Nvme2n1 : 5.91 115.92 7.24 0.00 0.00 970954.87 53115.64 1560500.88 00:10:52.591 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x8000 length 0x8000 00:10:52.591 Nvme2n1 : 5.78 114.07 7.13 0.00 0.00 1014245.46 81962.93 1076965.39 00:10:52.591 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x0 length 0x8000 00:10:52.591 Nvme2n2 : 5.95 120.36 7.52 0.00 0.00 911750.59 33884.12 1582479.76 00:10:52.591 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x8000 length 0x8000 00:10:52.591 Nvme2n2 : 5.82 120.98 7.56 0.00 0.00 936157.63 38005.16 1003702.44 00:10:52.591 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x0 length 0x8000 00:10:52.591 Nvme2n3 : 6.00 131.33 8.21 0.00 0.00 813774.72 18086.79 1604458.65 00:10:52.591 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x8000 length 0x8000 00:10:52.591 Nvme2n3 : 5.84 125.91 7.87 0.00 0.00 878646.21 15339.43 
1120923.17 00:10:52.591 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x0 length 0x2000 00:10:52.591 Nvme3n1 : 6.04 155.89 9.74 0.00 0.00 680331.70 922.94 1633763.83 00:10:52.591 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:52.591 Verification LBA range: start 0x2000 length 0x2000 00:10:52.591 Nvme3n1 : 5.92 147.18 9.20 0.00 0.00 733324.15 3977.95 1040333.92 00:10:52.591 =================================================================================================================== 00:10:52.591 Total : 1689.52 105.60 0.00 0.00 946430.70 922.94 2139278.20 00:10:55.121 00:10:55.121 real 0m10.250s 00:10:55.121 user 0m18.867s 00:10:55.121 sys 0m0.346s 00:10:55.121 ************************************ 00:10:55.121 END TEST bdev_verify_big_io 00:10:55.121 ************************************ 00:10:55.121 19:25:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:55.121 19:25:20 -- common/autotest_common.sh@10 -- # set +x 00:10:55.121 19:25:20 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:55.121 19:25:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:55.121 19:25:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.121 19:25:20 -- common/autotest_common.sh@10 -- # set +x 00:10:55.121 ************************************ 00:10:55.121 START TEST bdev_write_zeroes 00:10:55.121 ************************************ 00:10:55.121 19:25:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:55.121 [2024-04-24 19:25:20.426751] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:10:55.121 [2024-04-24 19:25:20.426931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69079 ] 00:10:55.121 [2024-04-24 19:25:20.592045] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.380 [2024-04-24 19:25:20.868906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.313 Running I/O for 1 seconds... 
00:10:57.247 00:10:57.247 Latency(us) 00:10:57.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.247 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:57.247 Nvme0n1p1 : 1.02 7112.75 27.78 0.00 0.00 17922.57 12706.54 33197.28 00:10:57.247 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:57.247 Nvme0n1p2 : 1.02 7103.40 27.75 0.00 0.00 17914.61 12649.31 34113.06 00:10:57.247 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:57.247 Nvme1n1 : 1.02 7095.81 27.72 0.00 0.00 17875.16 12878.25 33197.28 00:10:57.247 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:57.247 Nvme2n1 : 1.03 7172.27 28.02 0.00 0.00 17581.37 5294.39 33426.22 00:10:57.247 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:57.247 Nvme2n2 : 1.03 7164.49 27.99 0.00 0.00 17550.02 5523.34 33426.22 00:10:57.247 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:57.247 Nvme2n3 : 1.03 7156.64 27.96 0.00 0.00 17537.43 5666.43 32968.33 00:10:57.247 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:57.247 Nvme3n1 : 1.03 7148.25 27.92 0.00 0.00 17510.81 6152.94 32739.38 00:10:57.247 =================================================================================================================== 00:10:57.247 Total : 49953.61 195.13 0.00 0.00 17697.31 5294.39 34113.06 00:10:58.738 00:10:58.738 real 0m3.906s 00:10:58.738 user 0m3.522s 00:10:58.738 sys 0m0.262s 00:10:58.738 ************************************ 00:10:58.738 END TEST bdev_write_zeroes 00:10:58.738 ************************************ 00:10:58.738 19:25:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:58.738 19:25:24 -- common/autotest_common.sh@10 -- # set +x 00:10:58.738 19:25:24 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:58.738 19:25:24 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:58.738 19:25:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:58.738 19:25:24 -- common/autotest_common.sh@10 -- # set +x 00:10:58.738 ************************************ 00:10:58.738 START TEST bdev_json_nonenclosed 00:10:58.738 ************************************ 00:10:58.738 19:25:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:58.998 [2024-04-24 19:25:24.465555] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:10:58.998 [2024-04-24 19:25:24.465687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69142 ] 00:10:58.998 [2024-04-24 19:25:24.635222] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.257 [2024-04-24 19:25:24.934063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.257 [2024-04-24 19:25:24.934165] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
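Annotation: bdev_json_nonenclosed is a negative test: it hands bdevperf a config whose top level is not a JSON object and expects the parser error above ("not enclosed in {}.") followed by a clean app shutdown rather than a crash. The actual nonenclosed.json is not reproduced in the log; a hypothetical file that would trip the same check:

# Hypothetical stand-in for nonenclosed.json: valid JSON fragments,
# but no enclosing top-level {...} object.
cat > /tmp/nonenclosed.json <<'EOF'
"subsystems": [
  { "subsystem": "bdev", "config": [] }
]
EOF
# Expected: config rejected with "not enclosed in {}." and a non-zero,
# non-crashing exit from the app.
"$bdevperf" --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 \
    || echo "rejected as expected"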
00:10:59.257 [2024-04-24 19:25:24.934187] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:59.257 [2024-04-24 19:25:24.934199] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:59.890 00:10:59.890 real 0m1.081s 00:10:59.890 user 0m0.835s 00:10:59.890 sys 0m0.138s 00:10:59.890 19:25:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:59.890 19:25:25 -- common/autotest_common.sh@10 -- # set +x 00:10:59.890 ************************************ 00:10:59.890 END TEST bdev_json_nonenclosed 00:10:59.890 ************************************ 00:10:59.890 19:25:25 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:59.890 19:25:25 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:59.890 19:25:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:59.890 19:25:25 -- common/autotest_common.sh@10 -- # set +x 00:11:00.149 ************************************ 00:11:00.149 START TEST bdev_json_nonarray 00:11:00.149 ************************************ 00:11:00.149 19:25:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:00.149 [2024-04-24 19:25:25.700712] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:11:00.149 [2024-04-24 19:25:25.700877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69184 ] 00:11:00.407 [2024-04-24 19:25:25.873618] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.665 [2024-04-24 19:25:26.151852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.665 [2024-04-24 19:25:26.151968] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
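Annotation: bdev_json_nonarray is the companion negative test: the file parses as an object, but its "subsystems" key maps to something other than an array, producing the error above. For contrast, a minimal sketch of both shapes (assumed contents; the real nonarray.json is not reproduced in the log):

# Rejected shape: "subsystems" is an object, not an array.
cat > /tmp/nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF

# Accepted shape: top-level object whose "subsystems" is an array.
cat > /tmp/minimal.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
EOF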
00:11:00.665 [2024-04-24 19:25:26.151999] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:00.665 [2024-04-24 19:25:26.152011] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:01.233 00:11:01.233 real 0m1.067s 00:11:01.233 user 0m0.813s 00:11:01.233 sys 0m0.145s 00:11:01.233 ************************************ 00:11:01.233 END TEST bdev_json_nonarray 00:11:01.233 ************************************ 00:11:01.233 19:25:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:01.233 19:25:26 -- common/autotest_common.sh@10 -- # set +x 00:11:01.233 19:25:26 -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:11:01.233 19:25:26 -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:11:01.233 19:25:26 -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:01.233 19:25:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:01.233 19:25:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:01.233 19:25:26 -- common/autotest_common.sh@10 -- # set +x 00:11:01.233 ************************************ 00:11:01.233 START TEST bdev_gpt_uuid 00:11:01.233 ************************************ 00:11:01.233 19:25:26 -- common/autotest_common.sh@1111 -- # bdev_gpt_uuid 00:11:01.233 19:25:26 -- bdev/blockdev.sh@614 -- # local bdev 00:11:01.233 19:25:26 -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:11:01.233 19:25:26 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69221 00:11:01.233 19:25:26 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:01.233 19:25:26 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:01.233 19:25:26 -- bdev/blockdev.sh@49 -- # waitforlisten 69221 00:11:01.233 19:25:26 -- common/autotest_common.sh@817 -- # '[' -z 69221 ']' 00:11:01.233 19:25:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.233 19:25:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:01.233 19:25:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.233 19:25:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:01.233 19:25:26 -- common/autotest_common.sh@10 -- # set +x 00:11:01.493 [2024-04-24 19:25:26.919862] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:11:01.493 [2024-04-24 19:25:26.920113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69221 ] 00:11:01.493 [2024-04-24 19:25:27.090206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.752 [2024-04-24 19:25:27.410572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.128 19:25:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:03.128 19:25:28 -- common/autotest_common.sh@850 -- # return 0 00:11:03.128 19:25:28 -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:03.128 19:25:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.128 19:25:28 -- common/autotest_common.sh@10 -- # set +x 00:11:03.388 Some configs were skipped because the RPC state that can call them passed over. 
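Annotation: bdev_gpt_uuid, which starts here, asserts that GPT partition bdevs are addressable by UUID: bdev_get_bdevs -b <uuid> must return exactly one bdev whose first alias and gpt.unique_partition_guid both equal the queried UUID (the backslash-escaped [[ ... == \6\f... ]] lines below are just bash xtrace printing a literal-escaped pattern match). The same checks, runnable by hand against the live target:

# Sketch: query a GPT partition bdev by UUID and verify its identity
# fields, mirroring the jq pipeline in bdev/blockdev.sh. The UUID is
# the SPDK_TEST_first partition GUID from this run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
uuid=6f89f330-603b-4116-ac73-2ca8eae53030
bdev_json=$("$rpc" bdev_get_bdevs -b "$uuid")
[ "$(jq -r length <<< "$bdev_json")" -eq 1 ]
[ "$(jq -r '.[0].aliases[0]' <<< "$bdev_json")" = "$uuid" ]
[ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json")" = "$uuid" ]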
00:11:03.388 19:25:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.388 19:25:29 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:11:03.388 19:25:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.388 19:25:29 -- common/autotest_common.sh@10 -- # set +x 00:11:03.388 19:25:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.388 19:25:29 -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:03.388 19:25:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.388 19:25:29 -- common/autotest_common.sh@10 -- # set +x 00:11:03.388 19:25:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.388 19:25:29 -- bdev/blockdev.sh@621 -- # bdev='[ 00:11:03.388 { 00:11:03.388 "name": "Nvme0n1p1", 00:11:03.388 "aliases": [ 00:11:03.388 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:03.388 ], 00:11:03.388 "product_name": "GPT Disk", 00:11:03.388 "block_size": 4096, 00:11:03.388 "num_blocks": 774144, 00:11:03.388 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:03.388 "md_size": 64, 00:11:03.388 "md_interleave": false, 00:11:03.388 "dif_type": 0, 00:11:03.388 "assigned_rate_limits": { 00:11:03.388 "rw_ios_per_sec": 0, 00:11:03.388 "rw_mbytes_per_sec": 0, 00:11:03.388 "r_mbytes_per_sec": 0, 00:11:03.388 "w_mbytes_per_sec": 0 00:11:03.388 }, 00:11:03.388 "claimed": false, 00:11:03.388 "zoned": false, 00:11:03.388 "supported_io_types": { 00:11:03.388 "read": true, 00:11:03.388 "write": true, 00:11:03.388 "unmap": true, 00:11:03.388 "write_zeroes": true, 00:11:03.388 "flush": true, 00:11:03.388 "reset": true, 00:11:03.388 "compare": true, 00:11:03.388 "compare_and_write": false, 00:11:03.388 "abort": true, 00:11:03.388 "nvme_admin": false, 00:11:03.388 "nvme_io": false 00:11:03.388 }, 00:11:03.388 "driver_specific": { 00:11:03.388 "gpt": { 00:11:03.388 "base_bdev": "Nvme0n1", 00:11:03.388 "offset_blocks": 256, 00:11:03.388 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:03.388 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:03.388 "partition_name": "SPDK_TEST_first" 00:11:03.388 } 00:11:03.388 } 00:11:03.388 } 00:11:03.388 ]' 00:11:03.388 19:25:29 -- bdev/blockdev.sh@622 -- # jq -r length 00:11:03.646 19:25:29 -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:11:03.646 19:25:29 -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:11:03.646 19:25:29 -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:03.646 19:25:29 -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:03.646 19:25:29 -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:03.646 19:25:29 -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:03.646 19:25:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:03.646 19:25:29 -- common/autotest_common.sh@10 -- # set +x 00:11:03.646 19:25:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:03.646 19:25:29 -- bdev/blockdev.sh@626 -- # bdev='[ 00:11:03.646 { 00:11:03.646 "name": "Nvme0n1p2", 00:11:03.646 "aliases": [ 00:11:03.646 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:03.646 ], 00:11:03.646 "product_name": "GPT Disk", 00:11:03.646 "block_size": 4096, 00:11:03.646 "num_blocks": 774143, 00:11:03.646 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:11:03.646 "md_size": 64, 00:11:03.646 "md_interleave": false, 00:11:03.646 "dif_type": 0, 00:11:03.646 "assigned_rate_limits": { 00:11:03.646 "rw_ios_per_sec": 0, 00:11:03.646 "rw_mbytes_per_sec": 0, 00:11:03.646 "r_mbytes_per_sec": 0, 00:11:03.646 "w_mbytes_per_sec": 0 00:11:03.646 }, 00:11:03.646 "claimed": false, 00:11:03.646 "zoned": false, 00:11:03.646 "supported_io_types": { 00:11:03.646 "read": true, 00:11:03.646 "write": true, 00:11:03.646 "unmap": true, 00:11:03.646 "write_zeroes": true, 00:11:03.646 "flush": true, 00:11:03.646 "reset": true, 00:11:03.646 "compare": true, 00:11:03.646 "compare_and_write": false, 00:11:03.646 "abort": true, 00:11:03.646 "nvme_admin": false, 00:11:03.646 "nvme_io": false 00:11:03.646 }, 00:11:03.646 "driver_specific": { 00:11:03.646 "gpt": { 00:11:03.646 "base_bdev": "Nvme0n1", 00:11:03.646 "offset_blocks": 774400, 00:11:03.646 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:03.646 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:03.646 "partition_name": "SPDK_TEST_second" 00:11:03.646 } 00:11:03.646 } 00:11:03.646 } 00:11:03.646 ]' 00:11:03.646 19:25:29 -- bdev/blockdev.sh@627 -- # jq -r length 00:11:03.646 19:25:29 -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:11:03.646 19:25:29 -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:11:03.646 19:25:29 -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:03.646 19:25:29 -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:03.904 19:25:29 -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:03.904 19:25:29 -- bdev/blockdev.sh@631 -- # killprocess 69221 00:11:03.904 19:25:29 -- common/autotest_common.sh@936 -- # '[' -z 69221 ']' 00:11:03.904 19:25:29 -- common/autotest_common.sh@940 -- # kill -0 69221 00:11:03.904 19:25:29 -- common/autotest_common.sh@941 -- # uname 00:11:03.904 19:25:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:03.904 19:25:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69221 00:11:03.904 killing process with pid 69221 00:11:03.904 19:25:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:03.904 19:25:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:03.904 19:25:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69221' 00:11:03.904 19:25:29 -- common/autotest_common.sh@955 -- # kill 69221 00:11:03.904 19:25:29 -- common/autotest_common.sh@960 -- # wait 69221 00:11:07.190 ************************************ 00:11:07.190 END TEST bdev_gpt_uuid 00:11:07.190 ************************************ 00:11:07.190 00:11:07.190 real 0m5.395s 00:11:07.190 user 0m5.330s 00:11:07.190 sys 0m0.691s 00:11:07.190 19:25:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:07.190 19:25:32 -- common/autotest_common.sh@10 -- # set +x 00:11:07.190 19:25:32 -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:11:07.190 19:25:32 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:11:07.190 19:25:32 -- bdev/blockdev.sh@811 -- # cleanup 00:11:07.190 19:25:32 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:07.190 19:25:32 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:07.190 19:25:32 -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 
00:11:07.190 19:25:32 -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:07.190 19:25:32 -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:07.190 19:25:32 -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:07.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:07.448 Waiting for block devices as requested 00:11:07.448 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:07.448 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:07.706 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:07.706 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:12.969 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:12.969 19:25:38 -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:11:12.969 19:25:38 -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:11:12.969 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:12.969 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:11:12.969 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:12.969 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:11:12.969 19:25:38 -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:12.969 00:11:12.969 real 1m12.199s 00:11:12.969 user 1m30.613s 00:11:12.969 sys 0m11.388s 00:11:12.969 19:25:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:12.969 ************************************ 00:11:12.969 END TEST blockdev_nvme_gpt 00:11:12.969 ************************************ 00:11:12.969 19:25:38 -- common/autotest_common.sh@10 -- # set +x 00:11:12.969 19:25:38 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:12.969 19:25:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:12.969 19:25:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:12.969 19:25:38 -- common/autotest_common.sh@10 -- # set +x 00:11:13.243 ************************************ 00:11:13.243 START TEST nvme 00:11:13.243 ************************************ 00:11:13.243 19:25:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:13.243 * Looking for test storage... 00:11:13.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:13.243 19:25:38 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:13.826 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:14.762 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:14.762 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:14.762 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:14.762 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:14.762 19:25:40 -- nvme/nvme.sh@79 -- # uname 00:11:14.762 Waiting for stub to ready for secondary processes... 
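Annotation: the cleanup above restores the drive the GPT tests wrote to. wipefs --all on /dev/nvme1n1 erases the "EFI PART" (45 46 49 20 50 41 52 54) signature of the primary GPT header at offset 0x1000 (LBA 1 on a 4096-byte-sector device), the backup header near the end of the disk at 0x17a179000, and the 55 aa protective-MBR marker at 0x1fe, then triggers a partition-table re-read. Standalone:

# The cleanup's signature wipe, shown on its own. Run with no options,
# wipefs only lists the signatures it would erase.
wipefs /dev/nvme1n1          # dry listing of GPT/PMBR signatures
wipefs --all /dev/nvme1n1    # erase them; kernel re-reads the table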
00:11:14.762 19:25:40 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:14.762 19:25:40 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:14.762 19:25:40 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:14.762 19:25:40 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:14.762 19:25:40 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:11:14.762 19:25:40 -- common/autotest_common.sh@1055 -- # echo 0 00:11:14.762 19:25:40 -- common/autotest_common.sh@1057 -- # stubpid=69879 00:11:14.762 19:25:40 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:11:14.762 19:25:40 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:14.762 19:25:40 -- common/autotest_common.sh@1061 -- # [[ -e /proc/69879 ]] 00:11:14.762 19:25:40 -- common/autotest_common.sh@1062 -- # sleep 1s 00:11:14.762 19:25:40 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:14.762 [2024-04-24 19:25:40.409074] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:11:14.762 [2024-04-24 19:25:40.409202] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:15.698 19:25:41 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:15.698 19:25:41 -- common/autotest_common.sh@1061 -- # [[ -e /proc/69879 ]] 00:11:15.698 19:25:41 -- common/autotest_common.sh@1062 -- # sleep 1s 00:11:15.957 [2024-04-24 19:25:41.409615] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:16.217 [2024-04-24 19:25:41.680073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.217 [2024-04-24 19:25:41.680106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.217 [2024-04-24 19:25:41.680112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.217 [2024-04-24 19:25:41.699211] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:16.217 [2024-04-24 19:25:41.699272] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:16.217 [2024-04-24 19:25:41.713778] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:16.217 [2024-04-24 19:25:41.714307] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:16.217 [2024-04-24 19:25:41.724827] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:16.217 [2024-04-24 19:25:41.725099] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:16.217 [2024-04-24 19:25:41.725164] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:16.217 [2024-04-24 19:25:41.728152] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:16.217 [2024-04-24 19:25:41.728337] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:16.217 [2024-04-24 19:25:41.728399] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:16.217 [2024-04-24 19:25:41.731579] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse 
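Annotation: the nvme suite starts a long-lived stub process (-s 4096 -i 0 -m 0xE) that holds the DPDK primary-process state, so each short test binary can attach as a secondary process instead of re-initializing EAL. The "Waiting for stub to ready..." loop above polls for the stub's ready file while checking the process is still alive; roughly:

# Sketch of the stub readiness wait seen above. The stub creates
# /var/run/spdk_stub0 once its primary DPDK process has initialized.
stubpid=69879   # pid printed in this run
while [ ! -e /var/run/spdk_stub0 ]; do
    [ -e /proc/$stubpid ] || { echo "stub exited before becoming ready" >&2; exit 1; }
    sleep 1s
done
echo done.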
device for controller 00:11:16.217 [2024-04-24 19:25:41.731755] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:16.217 [2024-04-24 19:25:41.731811] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:16.217 [2024-04-24 19:25:41.731854] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:16.218 [2024-04-24 19:25:41.731911] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:16.786 done. 00:11:16.786 19:25:42 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:16.786 19:25:42 -- common/autotest_common.sh@1064 -- # echo done. 00:11:16.787 19:25:42 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:16.787 19:25:42 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:11:16.787 19:25:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:16.787 19:25:42 -- common/autotest_common.sh@10 -- # set +x 00:11:16.787 ************************************ 00:11:16.787 START TEST nvme_reset 00:11:16.787 ************************************ 00:11:16.787 19:25:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:17.045 Initializing NVMe Controllers 00:11:17.045 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:17.045 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:17.045 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:17.045 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:17.045 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:17.045 00:11:17.045 real 0m0.266s 00:11:17.045 user 0m0.099s 00:11:17.045 sys 0m0.128s 00:11:17.045 19:25:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:17.045 19:25:42 -- common/autotest_common.sh@10 -- # set +x 00:11:17.045 ************************************ 00:11:17.045 END TEST nvme_reset 00:11:17.045 ************************************ 00:11:17.305 19:25:42 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:17.305 19:25:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:17.305 19:25:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.305 19:25:42 -- common/autotest_common.sh@10 -- # set +x 00:11:17.305 ************************************ 00:11:17.305 START TEST nvme_identify 00:11:17.305 ************************************ 00:11:17.305 19:25:42 -- common/autotest_common.sh@1111 -- # nvme_identify 00:11:17.305 19:25:42 -- nvme/nvme.sh@12 -- # bdfs=() 00:11:17.305 19:25:42 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:17.305 19:25:42 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:17.305 19:25:42 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:17.305 19:25:42 -- common/autotest_common.sh@1499 -- # bdfs=() 00:11:17.305 19:25:42 -- common/autotest_common.sh@1499 -- # local bdfs 00:11:17.305 19:25:42 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:17.305 19:25:42 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:17.305 19:25:42 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:11:17.305 19:25:42 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:11:17.305 19:25:42 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:17.305 
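Annotation: nvme_identify builds its controller list by asking gen_nvme.sh for a generated bdev config and pulling the PCI addresses out with jq, which is why the four QEMU controllers print above as bare BDFs before spdk_nvme_identify runs. The same enumeration by hand:

# Sketch: enumerate NVMe PCI addresses the way the test's get_nvme_bdfs does.
mapfile -t bdfs < <(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
                    | jq -r '.config[].params.traddr')
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"
# -> 0000:00:10.0  0000:00:11.0  0000:00:12.0  0000:00:13.0 in this run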
19:25:42 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:17.566 ===================================================== 00:11:17.566 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:17.566 ===================================================== 00:11:17.566 Controller Capabilities/Features 00:11:17.566 ================================ 00:11:17.566 Vendor ID: 1b36 00:11:17.566 Subsystem Vendor ID: 1af4 00:11:17.566 Serial Number: 12340 00:11:17.566 Model Number: QEMU NVMe Ctrl 00:11:17.566 Firmware Version: 8.0.0 00:11:17.566 Recommended Arb Burst: 6 00:11:17.566 IEEE OUI Identifier: 00 54 52 00:11:17.566 Multi-path I/O 00:11:17.566 May have multiple subsystem ports: No 00:11:17.566 May have multiple controllers: No 00:11:17.566 Associated with SR-IOV VF: No 00:11:17.566 Max Data Transfer Size: 524288 00:11:17.566 Max Number of Namespaces: 256 00:11:17.566 Max Number of I/O Queues: 64 00:11:17.566 NVMe Specification Version (VS): 1.4 00:11:17.566 NVMe Specification Version (Identify): 1.4 00:11:17.566 Maximum Queue Entries: 2048 00:11:17.566 Contiguous Queues Required: Yes 00:11:17.566 Arbitration Mechanisms Supported 00:11:17.566 Weighted Round Robin: Not Supported 00:11:17.566 Vendor Specific: Not Supported 00:11:17.566 Reset Timeout: 7500 ms 00:11:17.566 Doorbell Stride: 4 bytes 00:11:17.566 NVM Subsystem Reset: Not Supported 00:11:17.566 Command Sets Supported 00:11:17.566 NVM Command Set: Supported 00:11:17.566 Boot Partition: Not Supported 00:11:17.566 Memory Page Size Minimum: 4096 bytes 00:11:17.566 Memory Page Size Maximum: 65536 bytes 00:11:17.566 Persistent Memory Region: Not Supported 00:11:17.566 Optional Asynchronous Events Supported 00:11:17.566 Namespace Attribute Notices: Supported 00:11:17.566 Firmware Activation Notices: Not Supported 00:11:17.566 ANA Change Notices: Not Supported 00:11:17.566 PLE Aggregate Log Change Notices: Not Supported 00:11:17.566 LBA Status Info Alert Notices: Not Supported 00:11:17.566 EGE Aggregate Log Change Notices: Not Supported 00:11:17.566 Normal NVM Subsystem Shutdown event: Not Supported 00:11:17.566 Zone Descriptor Change Notices: Not Supported 00:11:17.566 Discovery Log Change Notices: Not Supported 00:11:17.566 Controller Attributes 00:11:17.566 128-bit Host Identifier: Not Supported 00:11:17.566 Non-Operational Permissive Mode: Not Supported 00:11:17.566 NVM Sets: Not Supported 00:11:17.566 Read Recovery Levels: Not Supported 00:11:17.566 Endurance Groups: Not Supported 00:11:17.566 Predictable Latency Mode: Not Supported 00:11:17.566 Traffic Based Keep ALive: Not Supported 00:11:17.566 Namespace Granularity: Not Supported 00:11:17.566 SQ Associations: Not Supported 00:11:17.566 UUID List: Not Supported 00:11:17.566 Multi-Domain Subsystem: Not Supported 00:11:17.566 Fixed Capacity Management: Not Supported 00:11:17.566 Variable Capacity Management: Not Supported 00:11:17.566 Delete Endurance Group: Not Supported 00:11:17.566 Delete NVM Set: Not Supported 00:11:17.566 Extended LBA Formats Supported: Supported 00:11:17.566 Flexible Data Placement Supported: Not Supported 00:11:17.566 00:11:17.566 Controller Memory Buffer Support 00:11:17.566 ================================ 00:11:17.566 Supported: No 00:11:17.566 00:11:17.566 Persistent Memory Region Support 00:11:17.566 ================================ 00:11:17.566 Supported: No 00:11:17.566 00:11:17.566 Admin Command Set Attributes 00:11:17.566 ============================ 00:11:17.566 Security Send/Receive: Not Supported 00:11:17.566 Format 
NVM: Supported 00:11:17.566 Firmware Activate/Download: Not Supported 00:11:17.566 Namespace Management: Supported 00:11:17.566 Device Self-Test: Not Supported 00:11:17.566 Directives: Supported 00:11:17.566 NVMe-MI: Not Supported 00:11:17.566 Virtualization Management: Not Supported 00:11:17.566 Doorbell Buffer Config: Supported 00:11:17.566 Get LBA Status Capability: Not Supported 00:11:17.566 Command & Feature Lockdown Capability: Not Supported 00:11:17.566 Abort Command Limit: 4 00:11:17.566 Async Event Request Limit: 4 00:11:17.566 Number of Firmware Slots: N/A 00:11:17.566 Firmware Slot 1 Read-Only: N/A 00:11:17.566 Firmware Activation Without Reset: N/A 00:11:17.566 Multiple Update Detection Support: N/A 00:11:17.566 Firmware Update Granularity: No Information Provided 00:11:17.566 Per-Namespace SMART Log: Yes 00:11:17.566 Asymmetric Namespace Access Log Page: Not Supported 00:11:17.566 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:17.566 Command Effects Log Page: Supported 00:11:17.566 Get Log Page Extended Data: Supported 00:11:17.566 Telemetry Log Pages: Not Supported 00:11:17.566 Persistent Event Log Pages: Not Supported 00:11:17.566 Supported Log Pages Log Page: May Support 00:11:17.566 Commands Supported & Effects Log Page: Not Supported 00:11:17.566 Feature Identifiers & Effects Log Page:May Support 00:11:17.566 NVMe-MI Commands & Effects Log Page: May Support 00:11:17.566 Data Area 4 for Telemetry Log: Not Supported 00:11:17.566 Error Log Page Entries Supported: 1 00:11:17.566 Keep Alive: Not Supported 00:11:17.566 00:11:17.566 NVM Command Set Attributes 00:11:17.566 ========================== 00:11:17.566 Submission Queue Entry Size 00:11:17.566 Max: 64 00:11:17.566 Min: 64 00:11:17.566 Completion Queue Entry Size 00:11:17.566 Max: 16 00:11:17.566 Min: 16 00:11:17.566 Number of Namespaces: 256 00:11:17.566 Compare Command: Supported 00:11:17.566 Write Uncorrectable Command: Not Supported 00:11:17.566 Dataset Management Command: Supported 00:11:17.566 Write Zeroes Command: Supported 00:11:17.566 Set Features Save Field: Supported 00:11:17.566 Reservations: Not Supported 00:11:17.566 Timestamp: Supported 00:11:17.566 Copy: Supported 00:11:17.566 Volatile Write Cache: Present 00:11:17.566 Atomic Write Unit (Normal): 1 00:11:17.566 Atomic Write Unit (PFail): 1 00:11:17.566 Atomic Compare & Write Unit: 1 00:11:17.566 Fused Compare & Write: Not Supported 00:11:17.567 Scatter-Gather List 00:11:17.567 SGL Command Set: Supported 00:11:17.567 SGL Keyed: Not Supported 00:11:17.567 SGL Bit Bucket Descriptor: Not Supported 00:11:17.567 SGL Metadata Pointer: Not Supported 00:11:17.567 Oversized SGL: Not Supported 00:11:17.567 SGL Metadata Address: Not Supported 00:11:17.567 SGL Offset: Not Supported 00:11:17.567 Transport SGL Data Block: Not Supported 00:11:17.567 Replay Protected Memory Block: Not Supported 00:11:17.567 00:11:17.567 Firmware Slot Information 00:11:17.567 ========================= 00:11:17.567 Active slot: 1 00:11:17.567 Slot 1 Firmware Revision: 1.0 00:11:17.567 00:11:17.567 00:11:17.567 Commands Supported and Effects 00:11:17.567 ============================== 00:11:17.567 Admin Commands 00:11:17.567 -------------- 00:11:17.567 Delete I/O Submission Queue (00h): Supported 00:11:17.567 Create I/O Submission Queue (01h): Supported 00:11:17.567 Get Log Page (02h): Supported 00:11:17.567 Delete I/O Completion Queue (04h): Supported 00:11:17.567 Create I/O Completion Queue (05h): Supported 00:11:17.567 Identify (06h): Supported 00:11:17.567 Abort (08h): Supported 
00:11:17.567 Set Features (09h): Supported 00:11:17.567 Get Features (0Ah): Supported 00:11:17.567 Asynchronous Event Request (0Ch): Supported 00:11:17.567 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:17.567 Directive Send (19h): Supported 00:11:17.567 Directive Receive (1Ah): Supported 00:11:17.567 Virtualization Management (1Ch): Supported 00:11:17.567 Doorbell Buffer Config (7Ch): Supported 00:11:17.567 Format NVM (80h): Supported LBA-Change 00:11:17.567 I/O Commands 00:11:17.567 ------------ 00:11:17.567 Flush (00h): Supported LBA-Change 00:11:17.567 Write (01h): Supported LBA-Change 00:11:17.567 Read (02h): Supported 00:11:17.567 Compare (05h): Supported 00:11:17.567 Write Zeroes (08h): Supported LBA-Change 00:11:17.567 Dataset Management (09h): Supported LBA-Change 00:11:17.567 Unknown (0Ch): Supported 00:11:17.567 Unknown (12h): Supported 00:11:17.567 Copy (19h): Supported LBA-Change 00:11:17.567 Unknown (1Dh): Supported LBA-Change 00:11:17.567 00:11:17.567 Error Log 00:11:17.567 ========= 00:11:17.567 00:11:17.567 Arbitration 00:11:17.567 =========== 00:11:17.567 Arbitration Burst: no limit 00:11:17.567 00:11:17.567 Power Management 00:11:17.567 ================ 00:11:17.567 Number of Power States: 1 00:11:17.567 Current Power State: Power State #0 00:11:17.567 Power State #0: 00:11:17.567 Max Power: 25.00 W 00:11:17.567 Non-Operational State: Operational 00:11:17.567 Entry Latency: 16 microseconds 00:11:17.567 Exit Latency: 4 microseconds 00:11:17.567 Relative Read Throughput: 0 00:11:17.567 Relative Read Latency: 0 00:11:17.567 Relative Write Throughput: 0 00:11:17.567 Relative Write Latency: 0 00:11:17.567 Idle Power: Not Reported 00:11:17.567 Active Power: Not Reported 00:11:17.567 Non-Operational Permissive Mode: Not Supported 00:11:17.567 00:11:17.567 Health Information 00:11:17.567 ================== 00:11:17.567 Critical Warnings: 00:11:17.567 Available Spare Space: OK 00:11:17.567 Temperature: OK 00:11:17.567 Device Reliability: OK 00:11:17.567 Read Only: No 00:11:17.567 Volatile Memory Backup: OK 00:11:17.567 Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.567 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:17.567 Available Spare: 0% 00:11:17.567 Available Spare Threshold: 0% 00:11:17.567 Life Percentage Used: 0% 00:11:17.567 Data Units Read: 1057 00:11:17.567 Data Units Written: 888 00:11:17.567 Host Read Commands: 49758 00:11:17.567 Host Write Commands: 48227 00:11:17.567 Controller Busy Time: 0 minutes 00:11:17.567 Power Cycles: 0 00:11:17.567 Power On Hours: 0 hours 00:11:17.567 Unsafe Shutdowns: 0 00:11:17.567 Unrecoverable Media Errors: 0 00:11:17.567 Lifetime Error Log Entries: 0 00:11:17.567 Warning Temperature Time: 0 minutes 00:11:17.567 Critical Temperature Time: 0 minutes 00:11:17.567 00:11:17.567 Number of Queues 00:11:17.567 ================ 00:11:17.567 Number of I/O Submission Queues: 64 00:11:17.567 Number of I/O Completion Queues: 64 00:11:17.567 00:11:17.567 ZNS Specific Controller Data 00:11:17.567 ============================ 00:11:17.567 Zone Append Size Limit: 0 00:11:17.567 00:11:17.567 00:11:17.567 Active Namespaces 00:11:17.567 ================= 00:11:17.567 Namespace ID:1 00:11:17.567 Error Recovery Timeout: Unlimited 00:11:17.567 Command Set Identifier: NVM (00h) 00:11:17.567 Deallocate: Supported 00:11:17.567 Deallocated/Unwritten Error: Supported 00:11:17.567 Deallocated Read Value: All 0x00 00:11:17.567 Deallocate in Write Zeroes: Not Supported 00:11:17.567 Deallocated Guard Field: 0xFFFF 00:11:17.567 Flush: 
Supported 00:11:17.567 Reservation: Not Supported 00:11:17.567 Metadata Transferred as: Separate Metadata Buffer 00:11:17.567 Namespace Sharing Capabilities: Private 00:11:17.567 Size (in LBAs): 1548666 (5GiB) 00:11:17.567 Capacity (in LBAs): 1548666 (5GiB) 00:11:17.567 Utilization (in LBAs): 1548666 (5GiB) 00:11:17.567 Thin Provisioning: Not Supported 00:11:17.567 Per-NS Atomic Units: No 00:11:17.567 Maximum Single Source Range Length: 128 00:11:17.567 Maximum Copy Length: 128 00:11:17.567 Maximum Source Range Count: 128 00:11:17.567 NGUID/EUI64 Never Reused: No 00:11:17.567 Namespace Write Protected: No 00:11:17.567 Number of LBA Formats: 8 00:11:17.567 Current LBA Format: LBA Format #07 00:11:17.567 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.567 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.567 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.567 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.567 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:17.567 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.567 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:17.567 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.567 00:11:17.567 ===================================================== 00:11:17.567 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:17.567 ===================================================== 00:11:17.567 Controller Capabilities/Features 00:11:17.567 ================================ 00:11:17.567 Vendor ID: 1b36 00:11:17.567 Subsystem Vendor ID: 1af4 00:11:17.567 Serial Number: 12341 00:11:17.567 Model Number: QEMU NVMe Ctrl 00:11:17.567 Firmware Version: 8.0.0 00:11:17.567 Recommended Arb Burst: 6 00:11:17.567 IEEE OUI Identifier: 00 54 52 00:11:17.567 Multi-path I/O 00:11:17.567 May have multiple subsystem ports: No 00:11:17.567 May have multiple controllers: No 00:11:17.567 Associated with SR-IOV VF: No 00:11:17.567 Max Data Transfer Size: 524288 00:11:17.567 Max Number of Namespaces: 256 00:11:17.567 Max Number of I/O Queues: 64 00:11:17.567 NVMe Specification Version (VS): 1.4 00:11:17.567 NVMe Specification Version (Identify): 1.4 00:11:17.567 Maximum Queue Entries: 2048 00:11:17.567 Contiguous Queues Required: Yes 00:11:17.567 Arbitration Mechanisms Supported 00:11:17.567 Weighted Round Robin: Not Supported 00:11:17.567 Vendor Specific: Not Supported 00:11:17.567 Reset Timeout: 7500 ms 00:11:17.567 Doorbell Stride: 4 bytes 00:11:17.567 NVM Subsystem Reset: Not Supported 00:11:17.567 Command Sets Supported 00:11:17.567 NVM Command Set: Supported 00:11:17.567 Boot Partition: Not Supported 00:11:17.567 Memory Page Size Minimum: 4096 bytes 00:11:17.567 Memory Page Size Maximum: 65536 bytes 00:11:17.567 Persistent Memory Region: Not Supported 00:11:17.567 Optional Asynchronous Events Supported 00:11:17.567 Namespace Attribute Notices: Supported 00:11:17.567 Firmware Activation Notices: Not Supported 00:11:17.567 ANA Change Notices: Not Supported 00:11:17.567 PLE Aggregate Log Change Notices: Not Supported 00:11:17.567 LBA Status Info Alert Notices: Not Supported 00:11:17.567 EGE Aggregate Log Change Notices: Not Supported 00:11:17.567 Normal NVM Subsystem Shutdown event: Not Supported 00:11:17.567 Zone Descriptor Change Notices: Not Supported 00:11:17.567 Discovery Log Change Notices: Not Supported 00:11:17.567 Controller Attributes 00:11:17.567 128-bit Host Identifier: Not Supported 00:11:17.567 Non-Operational Permissive Mode: Not Supported 00:11:17.567 NVM Sets: Not Supported 00:11:17.567 Read Recovery 
Levels: Not Supported 00:11:17.567 Endurance Groups: Not Supported 00:11:17.567 Predictable Latency Mode: Not Supported 00:11:17.567 Traffic Based Keep ALive: Not Supported 00:11:17.567 Namespace Granularity: Not Supported 00:11:17.567 SQ Associations: Not Supported 00:11:17.567 UUID List: Not Supported 00:11:17.567 Multi-Domain Subsystem: Not Supported 00:11:17.567 Fixed Capacity Management: Not Supported 00:11:17.567 Variable Capacity Management: Not Supported 00:11:17.567 Delete Endurance Group: Not Supported 00:11:17.567 Delete NVM Set: Not Supported 00:11:17.567 Extended LBA Formats Supported: Supported 00:11:17.568 Flexible Data Placement Supported: Not Supported 00:11:17.568 00:11:17.568 Controller Memory Buffer Support 00:11:17.568 ================================ 00:11:17.568 Supported: No 00:11:17.568 00:11:17.568 Persistent Memory Region Support 00:11:17.568 ================================ 00:11:17.568 Supported: No 00:11:17.568 00:11:17.568 Admin Command Set Attributes 00:11:17.568 ============================ 00:11:17.568 Security Send/Receive: Not Supported 00:11:17.568 Format NVM: Supported 00:11:17.568 Firmware Activate/Download: Not Supported 00:11:17.568 Namespace Management: Supported 00:11:17.568 Device Self-Test: Not Supported 00:11:17.568 Directives: Supported 00:11:17.568 NVMe-MI: Not Supported 00:11:17.568 Virtualization Management: Not Supported 00:11:17.568 Doorbell Buffer Config: Supported 00:11:17.568 Get LBA Status Capability: Not Supported 00:11:17.568 Command & Feature Lockdown Capability: Not Supported 00:11:17.568 Abort Command Limit: 4 00:11:17.568 Async Event Request Limit: 4 00:11:17.568 Number of Firmware Slots: N/A 00:11:17.568 Firmware Slot 1 Read-Only: N/A 00:11:17.568 Firmware Activation Without Reset: N/A 00:11:17.568 Multiple Update Detection Support: N/A 00:11:17.568 Firmware Update Granularity: No Information Provided 00:11:17.568 Per-Namespace SMART Log: Yes 00:11:17.568 Asymmetric Namespace Access Log Page: Not Supported 00:11:17.568 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:17.568 Command Effects Log Page: Supported 00:11:17.568 Get Log Page Extended Data: Supported 00:11:17.568 Telemetry Log Pages: Not Supported 00:11:17.568 Persistent Event Log Pages: Not Supported 00:11:17.568 Supported Log Pages Log Page: May Support 00:11:17.568 Commands Supported & Effects Log Page: Not Supported 00:11:17.568 Feature Identifiers & Effects Log Page:May Support 00:11:17.568 NVMe-MI Commands & Effects Log Page: May Support 00:11:17.568 Data Area 4 for Telemetry Log: Not Supported 00:11:17.568 Error Log Page Entries Supported: 1 00:11:17.568 Keep Alive: Not Supported 00:11:17.568 00:11:17.568 NVM Command Set Attributes 00:11:17.568 ========================== 00:11:17.568 Submission Queue Entry Size 00:11:17.568 Max: 64 00:11:17.568 Min: 64 00:11:17.568 Completion Queue Entry Size 00:11:17.568 Max: 16 00:11:17.568 Min: 16 00:11:17.568 Number of Namespaces: 256 00:11:17.568 Compare Command: Supported 00:11:17.568 Write Uncorrectable Command: Not Supported 00:11:17.568 Dataset Management Command: Supported 00:11:17.568 Write Zeroes Command: Supported 00:11:17.568 Set Features Save Field: Supported 00:11:17.568 Reservations: Not Supported 00:11:17.568 Timestamp: Supported 00:11:17.568 Copy: Supported 00:11:17.568 Volatile Write Cache: Present 00:11:17.568 Atomic Write Unit (Normal): 1 00:11:17.568 Atomic Write Unit (PFail): 1 00:11:17.568 Atomic Compare & Write Unit: 1 00:11:17.568 Fused Compare & Write: Not Supported 00:11:17.568 Scatter-Gather List 
00:11:17.568 SGL Command Set: Supported 00:11:17.568 SGL Keyed: Not Supported 00:11:17.568 SGL Bit Bucket Descriptor: Not Supported 00:11:17.568 SGL Metadata Pointer: Not Supported 00:11:17.568 Oversized SGL: Not Supported 00:11:17.568 SGL Metadata Address: Not Supported 00:11:17.568 SGL Offset: Not Supported 00:11:17.568 Transport SGL Data Block: Not Supported 00:11:17.568 Replay Protected Memory Block: Not Supported 00:11:17.568 00:11:17.568 Firmware Slot Information 00:11:17.568 ========================= 00:11:17.568 Active slot: 1 00:11:17.568 Slot 1 Firmware Revision: 1.0 00:11:17.568 00:11:17.568 00:11:17.568 Commands Supported and Effects 00:11:17.568 ============================== 00:11:17.568 Admin Commands 00:11:17.568 -------------- 00:11:17.568 Delete I/O Submission Queue (00h): Supported 00:11:17.568 Create I/O Submission Queue (01h): Supported 00:11:17.568 Get Log Page (02h): Supported 00:11:17.568 Delete I/O Completion Queue (04h): Supported 00:11:17.568 Create I/O Completion Queue (05h): Supported 00:11:17.568 Identify (06h): Supported 00:11:17.568 Abort (08h): Supported 00:11:17.568 Set Features (09h): Supported 00:11:17.568 Get Features (0Ah): Supported 00:11:17.568 Asynchronous Event Request (0Ch): Supported 00:11:17.568 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:17.568 Directive Send (19h): Supported 00:11:17.568 Directive Receive (1Ah): Supported 00:11:17.568 Virtualization Management (1Ch): Supported 00:11:17.568 Doorbell Buffer Config (7Ch): Supported 00:11:17.568 Format NVM (80h): Supported LBA-Change 00:11:17.568 I/O Commands 00:11:17.568 ------------ 00:11:17.568 Flush (00h): Supported LBA-Change 00:11:17.568 Write (01h): Supported LBA-Change 00:11:17.568 Read (02h): Supported 00:11:17.568 Compare (05h): Supported 00:11:17.568 Write Zeroes (08h): Supported LBA-Change 00:11:17.568 Dataset Management (09h): Supported LBA-Change 00:11:17.568 Unknown (0Ch): Supported 00:11:17.568 Unknown (12h): Supported 00:11:17.568 Copy (19h): Supported LBA-Change 00:11:17.568 Unknown (1Dh): Supported LBA-Change 00:11:17.568 00:11:17.568 Error Log 00:11:17.568 ========= 00:11:17.568 00:11:17.568 Arbitration 00:11:17.568 =========== 00:11:17.568 Arbitration Burst: no limit 00:11:17.568 00:11:17.568 Power Management 00:11:17.568 ================ 00:11:17.568 Number of Power States: 1 00:11:17.568 Current Power State: Power State #0 00:11:17.568 Power State #0: 00:11:17.568 Max Power: 25.00 W 00:11:17.568 Non-Operational State: Operational 00:11:17.568 Entry Latency: 16 microseconds 00:11:17.568 Exit Latency: 4 microseconds 00:11:17.568 Relative Read Throughput: 0 00:11:17.568 Relative Read Latency: 0 00:11:17.568 Relative Write Throughput: 0 00:11:17.568 Relative Write Latency: 0 00:11:17.568 Idle Power: Not Reported 00:11:17.568 Active Power: Not Reported 00:11:17.568 Non-Operational Permissive Mode: Not Supported 00:11:17.568 00:11:17.568 Health Information 00:11:17.568 ================== 00:11:17.568 Critical Warnings: 00:11:17.568 Available Spare Space: OK 00:11:17.568 Temperature: OK 00:11:17.568 Device Reliability: OK 00:11:17.568 Read Only: No 00:11:17.568 Volatile Memory Backup: OK 00:11:17.568 Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.568 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:17.568 Available Spare: 0% 00:11:17.568 Available Spare Threshold: 0% 00:11:17.568 Life Percentage Used: 0% 00:11:17.568 Data Units Read: 760 00:11:17.568 Data Units Written: 608 00:11:17.568 Host Read Commands: 35453 00:11:17.568 Host Write Commands: 33183 
00:11:17.568 Controller Busy Time: 0 minutes 00:11:17.568 Power Cycles: 0 00:11:17.568 Power On Hours: 0 hours 00:11:17.568 Unsafe Shutdowns: 0 00:11:17.568 Unrecoverable Media Errors: 0 00:11:17.568 Lifetime Error Log Entries: 0 00:11:17.568 Warning Temperature Time: 0 minutes 00:11:17.568 Critical Temperature Time: 0 minutes 00:11:17.568 00:11:17.568 Number of Queues 00:11:17.568 ================ 00:11:17.568 Number of I/O Submission Queues: 64 00:11:17.568 Number of I/O Completion Queues: 64 00:11:17.568 00:11:17.568 ZNS Specific Controller Data 00:11:17.568 ============================ 00:11:17.568 Zone Append Size Limit: 0 00:11:17.568 00:11:17.568 00:11:17.568 Active Namespaces 00:11:17.568 ================= 00:11:17.568 Namespace ID:1 00:11:17.568 Error Recovery Timeout: Unlimited 00:11:17.568 Command Set Identifier: NVM (00h) 00:11:17.568 Deallocate: Supported 00:11:17.568 Deallocated/Unwritten Error: Supported 00:11:17.568 Deallocated Read Value: All 0x00 00:11:17.568 Deallocate in Write Zeroes: Not Supported 00:11:17.568 Deallocated Guard Field: 0xFFFF 00:11:17.568 Flush: Supported 00:11:17.568 Reservation: Not Supported 00:11:17.568 Namespace Sharing Capabilities: Private 00:11:17.568 Size (in LBAs): 1310720 (5GiB) 00:11:17.568 Capacity (in LBAs): 1310720 (5GiB) 00:11:17.568 Utilization (in LBAs): 1310720 (5GiB) 00:11:17.568 Thin Provisioning: Not Supported 00:11:17.568 Per-NS Atomic Units: No 00:11:17.568 Maximum Single Source Range Length: 128 00:11:17.568 Maximum Copy Length: 128 00:11:17.568 Maximum Source Range Count: 128 00:11:17.568 NGUID/EUI64 Never Reused: No 00:11:17.568 Namespace Write Protected: No 00:11:17.568 Number of LBA Formats: 8 00:11:17.568 Current LBA Format: LBA Format #04 00:11:17.568 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.568 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.568 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.568 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.568 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:17.568 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.568 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:17.568 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.568 00:11:17.568 ===================================================== 00:11:17.568 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:17.569 ===================================================== 00:11:17.569 Controller Capabilities/Features 00:11:17.569 ================================ 00:11:17.569 Vendor ID: 1b36 00:11:17.569 Subsystem Vendor ID: 1af4 00:11:17.569 Serial Number: 12343 00:11:17.569 Model Number: QEMU NVMe Ctrl 00:11:17.569 Firmware Version: 8.0.0 00:11:17.569 Recommended Arb Burst: 6 00:11:17.569 IEEE OUI Identifier: 00 54 52 00:11:17.569 Multi-path I/O 00:11:17.569 May have multiple subsystem ports: No 00:11:17.569 May have multiple controllers: Yes 00:11:17.569 Associated with SR-IOV VF: No 00:11:17.569 Max Data Transfer Size: 524288 00:11:17.569 Max Number of Namespaces: 256 00:11:17.569 Max Number of I/O Queues: 64 00:11:17.569 NVMe Specification Version (VS): 1.4 00:11:17.569 NVMe Specification Version (Identify): 1.4 00:11:17.569 Maximum Queue Entries: 2048 00:11:17.569 Contiguous Queues Required: Yes 00:11:17.569 Arbitration Mechanisms Supported 00:11:17.569 Weighted Round Robin: Not Supported 00:11:17.569 Vendor Specific: Not Supported 00:11:17.569 Reset Timeout: 7500 ms 00:11:17.569 Doorbell Stride: 4 bytes 00:11:17.569 NVM Subsystem Reset: Not Supported 00:11:17.569 
Command Sets Supported 00:11:17.569 NVM Command Set: Supported 00:11:17.569 Boot Partition: Not Supported 00:11:17.569 Memory Page Size Minimum: 4096 bytes 00:11:17.569 Memory Page Size Maximum: 65536 bytes 00:11:17.569 Persistent Memory Region: Not Supported 00:11:17.569 Optional Asynchronous Events Supported 00:11:17.569 Namespace Attribute Notices: Supported 00:11:17.569 Firmware Activation Notices: Not Supported 00:11:17.569 ANA Change Notices: Not Supported 00:11:17.569 PLE Aggregate Log Change Notices: Not Supported 00:11:17.569 LBA Status Info Alert Notices: Not Supported 00:11:17.569 EGE Aggregate Log Change Notices: Not Supported 00:11:17.569 Normal NVM Subsystem Shutdown event: Not Supported 00:11:17.569 Zone Descriptor Change Notices: Not Supported 00:11:17.569 Discovery Log Change Notices: Not Supported 00:11:17.569 Controller Attributes 00:11:17.569 128-bit Host Identifier: Not Supported 00:11:17.569 Non-Operational Permissive Mode: Not Supported 00:11:17.569 NVM Sets: Not Supported 00:11:17.569 Read Recovery Levels: Not Supported 00:11:17.569 Endurance Groups: Supported 00:11:17.569 Predictable Latency Mode: Not Supported 00:11:17.569 Traffic Based Keep ALive: Not Supported 00:11:17.569 Namespace Granularity: Not Supported 00:11:17.569 SQ Associations: Not Supported 00:11:17.569 UUID List: Not Supported 00:11:17.569 Multi-Domain Subsystem: Not Supported 00:11:17.569 Fixed Capacity Management: Not Supported 00:11:17.569 Variable Capacity Management: Not Supported 00:11:17.569 Delete Endurance Group: Not Supported 00:11:17.569 Delete NVM Set: Not Supported 00:11:17.569 Extended LBA Formats Supported: Supported 00:11:17.569 Flexible Data Placement Supported: Supported 00:11:17.569 00:11:17.569 Controller Memory Buffer Support 00:11:17.569 ================================ 00:11:17.569 Supported: No 00:11:17.569 00:11:17.569 Persistent Memory Region Support 00:11:17.569 ================================ 00:11:17.569 Supported: No 00:11:17.569 00:11:17.569 Admin Command Set Attributes 00:11:17.569 ============================ 00:11:17.569 Security Send/Receive: Not Supported 00:11:17.569 Format NVM: Supported 00:11:17.569 Firmware Activate/Download: Not Supported 00:11:17.569 Namespace Management: Supported 00:11:17.569 Device Self-Test: Not Supported 00:11:17.569 Directives: Supported 00:11:17.569 NVMe-MI: Not Supported 00:11:17.569 Virtualization Management: Not Supported 00:11:17.569 Doorbell Buffer Config: Supported 00:11:17.569 Get LBA Status Capability: Not Supported 00:11:17.569 Command & Feature Lockdown Capability: Not Supported 00:11:17.569 Abort Command Limit: 4 00:11:17.569 Async Event Request Limit: 4 00:11:17.569 Number of Firmware Slots: N/A 00:11:17.569 Firmware Slot 1 Read-Only: N/A 00:11:17.569 Firmware Activation Without Reset: N/A 00:11:17.569 Multiple Update Detection Support: N/A 00:11:17.569 Firmware Update Granularity: No Information Provided 00:11:17.569 Per-Namespace SMART Log: Yes 00:11:17.569 Asymmetric Namespace Access Log Page: Not Supported 00:11:17.569 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:17.569 Command Effects Log Page: Supported 00:11:17.569 Get Log Page Extended Data: Supported 00:11:17.569 Telemetry Log Pages: Not Supported 00:11:17.569 Persistent Event Log Pages: Not Supported 00:11:17.569 Supported Log Pages Log Page: May Support 00:11:17.569 Commands Supported & Effects Log Page: Not Supported 00:11:17.569 Feature Identifiers & Effects Log Page:May Support 00:11:17.569 NVMe-MI Commands & Effects Log Page: May Support 
00:11:17.569 Data Area 4 for Telemetry Log: Not Supported 00:11:17.569 Error Log Page Entries Supported: 1 00:11:17.569 Keep Alive: Not Supported 00:11:17.569 00:11:17.569 NVM Command Set Attributes 00:11:17.569 ========================== 00:11:17.569 Submission Queue Entry Size 00:11:17.569 Max: 64 00:11:17.569 Min: 64 00:11:17.569 Completion Queue Entry Size 00:11:17.569 Max: 16 00:11:17.569 Min: 16 00:11:17.569 Number of Namespaces: 256 00:11:17.569 Compare Command: Supported 00:11:17.569 Write Uncorrectable Command: Not Supported 00:11:17.569 Dataset Management Command: Supported 00:11:17.569 Write Zeroes Command: Supported 00:11:17.569 Set Features Save Field: Supported 00:11:17.569 Reservations: Not Supported 00:11:17.569 Timestamp: Supported 00:11:17.569 Copy: Supported 00:11:17.569 Volatile Write Cache: Present 00:11:17.569 Atomic Write Unit (Normal): 1 00:11:17.569 Atomic Write Unit (PFail): 1 00:11:17.569 Atomic Compare & Write Unit: 1 00:11:17.569 Fused Compare & Write: Not Supported 00:11:17.569 Scatter-Gather List 00:11:17.569 SGL Command Set: Supported 00:11:17.569 SGL Keyed: Not Supported 00:11:17.569 SGL Bit Bucket Descriptor: Not Supported 00:11:17.569 SGL Metadata Pointer: Not Supported 00:11:17.569 Oversized SGL: Not Supported 00:11:17.569 SGL Metadata Address: Not Supported 00:11:17.569 SGL Offset: Not Supported 00:11:17.569 Transport SGL Data Block: Not Supported 00:11:17.569 Replay Protected Memory Block: Not Supported 00:11:17.569 00:11:17.569 Firmware Slot Information 00:11:17.569 ========================= 00:11:17.569 Active slot: 1 00:11:17.569 Slot 1 Firmware Revision: 1.0 00:11:17.569 00:11:17.569 00:11:17.569 Commands Supported and Effects 00:11:17.569 ============================== 00:11:17.569 Admin Commands 00:11:17.569 -------------- 00:11:17.569 Delete I/O Submission Queue (00h): Supported 00:11:17.569 Create I/O Submission Queue (01h): Supported 00:11:17.569 Get Log Page (02h): Supported 00:11:17.569 Delete I/O Completion Queue (04h): Supported 00:11:17.569 Create I/O Completion Queue (05h): Supported 00:11:17.569 Identify (06h): Supported 00:11:17.569 Abort (08h): Supported 00:11:17.569 Set Features (09h): Supported 00:11:17.569 Get Features (0Ah): Supported 00:11:17.569 Asynchronous Event Request (0Ch): Supported 00:11:17.569 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:17.569 Directive Send (19h): Supported 00:11:17.569 Directive Receive (1Ah): Supported 00:11:17.569 Virtualization Management (1Ch): Supported 00:11:17.569 Doorbell Buffer Config (7Ch): Supported 00:11:17.569 Format NVM (80h): Supported LBA-Change 00:11:17.569 I/O Commands 00:11:17.569 ------------ 00:11:17.569 Flush (00h): Supported LBA-Change 00:11:17.569 Write (01h): Supported LBA-Change 00:11:17.569 Read (02h): Supported 00:11:17.569 Compare (05h): Supported 00:11:17.569 Write Zeroes (08h): Supported LBA-Change 00:11:17.569 Dataset Management (09h): Supported LBA-Change 00:11:17.569 Unknown (0Ch): Supported 00:11:17.569 Unknown (12h): Supported 00:11:17.569 Copy (19h): Supported LBA-Change 00:11:17.569 Unknown (1Dh): Supported LBA-Change 00:11:17.569 00:11:17.569 Error Log 00:11:17.569 ========= 00:11:17.569 00:11:17.569 Arbitration 00:11:17.569 =========== 00:11:17.569 Arbitration Burst: no limit 00:11:17.569 00:11:17.569 Power Management 00:11:17.569 ================ 00:11:17.569 Number of Power States: 1 00:11:17.569 Current Power State: Power State #0 00:11:17.569 Power State #0: 00:11:17.569 Max Power: 25.00 W 00:11:17.569 Non-Operational State: Operational 
00:11:17.569 Entry Latency: 16 microseconds 00:11:17.569 Exit Latency: 4 microseconds 00:11:17.569 Relative Read Throughput: 0 00:11:17.569 Relative Read Latency: 0 00:11:17.569 Relative Write Throughput: 0 00:11:17.569 Relative Write Latency: 0 00:11:17.569 Idle Power: Not Reported 00:11:17.569 Active Power: Not Reported 00:11:17.569 Non-Operational Permissive Mode: Not Supported 00:11:17.569 00:11:17.569 Health Information 00:11:17.569 ================== 00:11:17.569 Critical Warnings: 00:11:17.569 Available Spare Space: OK 00:11:17.569 Temperature: OK 00:11:17.569 Device Reliability: OK 00:11:17.570 Read Only: No 00:11:17.570 Volatile Memory Backup: OK 00:11:17.570 Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.570 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:17.570 Available Spare: 0% 00:11:17.570 Available Spare Threshold: 0% 00:11:17.570 Life Percentage Used: 0% 00:11:17.570 Data Units Read: 826 00:11:17.570 Data Units Written: 720 00:11:17.570 Host Read Commands: 35529 00:11:17.570 Host Write Commands: 34119 00:11:17.570 Controller Busy Time: 0 minutes 00:11:17.570 Power Cycles: 0 00:11:17.570 Power On Hours: 0 hours 00:11:17.570 Unsafe Shutdowns: 0 00:11:17.570 Unrecoverable Media Errors: 0 00:11:17.570 Lifetime Error Log Entries: 0 00:11:17.570 Warning Temperature Time: 0 minutes 00:11:17.570 Critical Temperature Time: 0 minutes 00:11:17.570 00:11:17.570 Number of Queues 00:11:17.570 ================ 00:11:17.570 Number of I/O Submission Queues: 64 00:11:17.570 Number of I/O Completion Queues: 64 00:11:17.570 00:11:17.570 ZNS Specific Controller Data 00:11:17.570 ============================ 00:11:17.570 Zone Append Size Limit: 0 00:11:17.570 00:11:17.570 00:11:17.570 Active Namespaces 00:11:17.570 ================= 00:11:17.570 Namespace ID:1 00:11:17.570 Error Recovery Timeout: Unlimited 00:11:17.570 Command Set Identifier: NVM (00h) 00:11:17.570 Deallocate: Supported 00:11:17.570 Deallocated/Unwritten Error: Supported 00:11:17.570 Deallocated Read Value: All 0x00 00:11:17.570 Deallocate in Write Zeroes: Not Supported 00:11:17.570 Deallocated Guard Field: 0xFFFF 00:11:17.570 Flush: Supported 00:11:17.570 Reservation: Not Supported 00:11:17.570 Namespace Sharing Capabilities: Multiple Controllers 00:11:17.570 Size (in LBAs): 262144 (1GiB) 00:11:17.570 Capacity (in LBAs): 262144 (1GiB) 00:11:17.570 Utilization (in LBAs): 262144 (1GiB) 00:11:17.570 Thin Provisioning: Not Supported 00:11:17.570 Per-NS Atomic Units: No 00:11:17.570 Maximum Single Source Range Length: 128 00:11:17.570 Maximum Copy Length: 128 00:11:17.570 Maximum Source Range Count: 128 00:11:17.570 NGUID/EUI64 Never Reused: No 00:11:17.570 Namespace Write Protected: No 00:11:17.570 Endurance group ID: 1 00:11:17.570 Number of LBA Formats: 8 00:11:17.570 Current LBA Format: LBA Format #04 00:11:17.570 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.570 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.570 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.570 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.570 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:17.570 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.570 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:17.570 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.570 00:11:17.570 Get Feature FDP: 00:11:17.570 ================ 00:11:17.570 Enabled: Yes 00:11:17.570 FDP configuration index: 0 00:11:17.570 00:11:17.570 FDP configurations log page 00:11:17.570 
=========================== 00:11:17.570 Number of FDP configurations: 1 00:11:17.570 Version: 0 00:11:17.570 Size: 112 00:11:17.570 FDP Configuration Descriptor: 0 00:11:17.570 Descriptor Size: 96 00:11:17.570 Reclaim Group Identifier format: 2 00:11:17.570 FDP Volatile Write Cache: Not Present 00:11:17.570 FDP Configuration: Valid 00:11:17.570 Vendor Specific Size: 0 00:11:17.570 Number of Reclaim Groups: 2 00:11:17.570 Number of Reclaim Unit Handles: 8 00:11:17.570 Max Placement Identifiers: 128 00:11:17.570 Number of Namespaces Supported: 256 00:11:17.570 Reclaim Unit Nominal Size: 6000000 bytes 00:11:17.570 Estimated Reclaim Unit Time Limit: Not Reported 00:11:17.570 RUH Desc #000: RUH Type: Initially Isolated 00:11:17.570 RUH Desc #001: RUH Type: Initially Isolated 00:11:17.570 RUH Desc #002: RUH Type: Initially Isolated 00:11:17.570 RUH Desc #003: RUH Type: Initially Isolated 00:11:17.570 RUH Desc #004: RUH Type: Initially Isolated 00:11:17.570 RUH Desc #005: RUH Type: Initially Isolated 00:11:17.570 RUH Desc #006: RUH Type: Initially Isolated 00:11:17.570 RUH Desc #007: RUH Type: Initially Isolated 00:11:17.570 00:11:17.570 FDP reclaim unit handle usage log page 00:11:17.570 ====================================== 00:11:17.570 Number of Reclaim Unit Handles: 8 00:11:17.570 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:17.570 RUH Usage Desc #001: RUH Attributes: Unused 00:11:17.570 RUH Usage Desc #002: RUH Attributes: Unused 00:11:17.570 RUH Usage Desc #003: RUH Attributes: Unused 00:11:17.570 RUH Usage Desc #004: RUH Attributes: Unused 00:11:17.570 RUH Usage Desc #005: RUH Attributes: Unused 00:11:17.570 RUH Usage Desc #006: RUH Attributes: Unused 00:11:17.570 RUH Usage Desc #007: RUH Attributes: Unused 00:11:17.570 00:11:17.570 FDP statistics log page 00:11:17.570 ======================= 00:11:17.570 Host bytes with metadata written: 451190784 00:11:17.570 Media bytes with metadata written: 451244032 00:11:17.570 Media bytes erased: 0 00:11:17.570 00:11:17.570 FDP events log page 00:11:17.570 =================== 00:11:17.570 Number of FDP events: 0 00:11:17.570 00:11:17.570 ===================================================== 00:11:17.570 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:17.570 ===================================================== 00:11:17.570 Controller Capabilities/Features 00:11:17.570 ================================ 00:11:17.570 Vendor ID: 1b36 00:11:17.570 Subsystem Vendor ID: 1af4 00:11:17.570 Serial Number: 12342 00:11:17.570 Model Number: QEMU NVMe Ctrl 00:11:17.570 Firmware Version: 8.0.0 00:11:17.570 Recommended Arb Burst: 6 00:11:17.570 IEEE OUI Identifier: 00 54 52 00:11:17.570 Multi-path I/O 00:11:17.570 May have multiple subsystem ports: No 00:11:17.570 May have multiple controllers: No 00:11:17.570 Associated with SR-IOV VF: No 00:11:17.570 Max Data Transfer Size: 524288 00:11:17.570 Max Number of Namespaces: 256 00:11:17.570 Max Number of I/O Queues: 64 00:11:17.570 NVMe Specification Version (VS): 1.4 00:11:17.570 NVMe Specification Version (Identify): 1.4 00:11:17.570 Maximum Queue Entries: 2048 00:11:17.570 Contiguous Queues Required: Yes 00:11:17.570 Arbitration Mechanisms Supported 00:11:17.570 Weighted Round Robin: Not Supported 00:11:17.570 Vendor Specific: Not Supported 00:11:17.570 Reset Timeout: 7500 ms 00:11:17.570 Doorbell Stride: 4 bytes 00:11:17.570 NVM Subsystem Reset: Not Supported 00:11:17.570 Command Sets Supported 00:11:17.570 NVM Command Set: Supported 00:11:17.570 Boot Partition: Not Supported 00:11:17.570 
Memory Page Size Minimum: 4096 bytes 00:11:17.570 Memory Page Size Maximum: 65536 bytes 00:11:17.570 Persistent Memory Region: Not Supported 00:11:17.570 Optional Asynchronous Events Supported 00:11:17.570 Namespace Attribute Notices: Supported 00:11:17.570 Firmware Activation Notices: Not Supported 00:11:17.570 ANA Change Notices: Not Supported 00:11:17.570 PLE Aggregate Log Change Notices: Not Supported 00:11:17.570 LBA Status Info Alert Notices: Not Supported 00:11:17.570 EGE Aggregate Log Change Notices: Not Supported 00:11:17.570 Normal NVM Subsystem Shutdown event: Not Supported 00:11:17.570 Zone Descriptor Change Notices: Not Supported 00:11:17.570 Discovery Log Change Notices: Not Supported 00:11:17.570 Controller Attributes 00:11:17.570 128-bit Host Identifier: Not Supported 00:11:17.570 Non-Operational Permissive Mode: Not Supported 00:11:17.570 NVM Sets: Not Supported 00:11:17.570 Read Recovery Levels: Not Supported 00:11:17.570 Endurance Groups: Not Supported 00:11:17.570 Predictable Latency Mode: Not Supported 00:11:17.570 Traffic Based Keep ALive: Not Supported 00:11:17.570 Namespace Granularity: Not Supported 00:11:17.570 SQ Associations: Not Supported 00:11:17.570 UUID List: Not Supported 00:11:17.570 Multi-Domain Subsystem: Not Supported 00:11:17.570 Fixed Capacity Management: Not Supported 00:11:17.570 Variable Capacity Management: Not Supported 00:11:17.570 Delete Endurance Group: Not Supported 00:11:17.570 Delete NVM Set: Not Supported 00:11:17.570 Extended LBA Formats Supported: Supported 00:11:17.570 Flexible Data Placement Supported: Not Supported 00:11:17.570 00:11:17.570 Controller Memory Buffer Support 00:11:17.570 ================================ 00:11:17.570 Supported: No 00:11:17.570 00:11:17.570 Persistent Memory Region Support 00:11:17.570 ================================ 00:11:17.570 Supported: No 00:11:17.570 00:11:17.570 Admin Command Set Attributes 00:11:17.570 ============================ 00:11:17.570 Security Send/Receive: Not Supported 00:11:17.570 Format NVM: Supported 00:11:17.570 Firmware Activate/Download: Not Supported 00:11:17.570 Namespace Management: Supported 00:11:17.570 Device Self-Test: Not Supported 00:11:17.570 Directives: Supported 00:11:17.570 NVMe-MI: Not Supported 00:11:17.570 Virtualization Management: Not Supported 00:11:17.570 Doorbell Buffer Config: Supported 00:11:17.570 Get LBA Status Capability: Not Supported 00:11:17.570 Command & Feature Lockdown Capability: Not Supported 00:11:17.570 Abort Command Limit: 4 00:11:17.571 Async Event Request Limit: 4 00:11:17.571 Number of Firmware Slots: N/A 00:11:17.571 Firmware Slot 1 Read-Only: N/A 00:11:17.571 Firmware Activation Without Reset: N/A 00:11:17.571 Multiple Update Detection Support: N/A 00:11:17.571 Firmware Update Granularity: No Information Provided 00:11:17.571 Per-Namespace SMART Log: Yes 00:11:17.571 Asymmetric Namespace Access Log Page: Not Supported 00:11:17.571 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:17.571 Command Effects Log Page: Supported 00:11:17.571 Get Log Page Extended Data: Supported 00:11:17.571 Telemetry Log Pages: Not Supported 00:11:17.571 Persistent Event Log Pages: Not Supported 00:11:17.571 Supported Log Pages Log Page: May Support 00:11:17.571 Commands Supported & Effects Log Page: Not Supported 00:11:17.571 Feature Identifiers & Effects Log Page:May Support 00:11:17.571 NVMe-MI Commands & Effects Log Page: May Support 00:11:17.571 Data Area 4 for Telemetry Log: Not Supported 00:11:17.571 Error Log Page Entries Supported: 1 00:11:17.571 Keep 
Alive: Not Supported 00:11:17.571 00:11:17.571 NVM Command Set Attributes 00:11:17.571 ========================== 00:11:17.571 Submission Queue Entry Size 00:11:17.571 Max: 64 00:11:17.571 Min: 64 00:11:17.571 Completion Queue Entry Size 00:11:17.571 Max: 16 00:11:17.571 Min: 16 00:11:17.571 Number of Namespaces: 256 00:11:17.571 Compare Command: Supported 00:11:17.571 Write Uncorrectable Command: Not Supported 00:11:17.571 Dataset Management Command: Supported 00:11:17.571 Write Zeroes Command: Supported 00:11:17.571 Set Features Save Field: Supported 00:11:17.571 Reservations: Not Supported 00:11:17.571 Timestamp: Supported 00:11:17.571 Copy: Supported 00:11:17.571 Volatile Write Cache: Present 00:11:17.571 Atomic Write Unit (Normal): 1 00:11:17.571 Atomic Write Unit (PFail): 1 00:11:17.571 Atomic Compare & Write Unit: 1 00:11:17.571 Fused Compare & Write: Not Supported 00:11:17.571 Scatter-Gather List 00:11:17.571 SGL Command Set: Supported 00:11:17.571 SGL Keyed: Not Supported 00:11:17.571 SGL Bit Bucket Descriptor: Not Supported 00:11:17.571 SGL Metadata Pointer: Not Supported 00:11:17.571 Oversized SGL: Not Supported 00:11:17.571 SGL Metadata Address: Not Supported 00:11:17.571 SGL Offset: Not Supported 00:11:17.571 Transport SGL Data Block: Not Supported 00:11:17.571 Replay Protected Memory Block: Not Supported 00:11:17.571 00:11:17.571 Firmware Slot Information 00:11:17.571 ========================= 00:11:17.571 Active slot: 1 00:11:17.571 Slot 1 Firmware Revision: 1.0 00:11:17.571 00:11:17.571 00:11:17.571 Commands Supported and Effects 00:11:17.571 ============================== 00:11:17.571 Admin Commands 00:11:17.571 -------------- 00:11:17.571 Delete I/O Submission Queue (00h): Supported 00:11:17.571 Create I/O Submission Queue (01h): Supported 00:11:17.571 Get Log Page (02h): Supported 00:11:17.571 Delete I/O Completion Queue (04h): Supported 00:11:17.571 Create I/O Completion Queue (05h): Supported 00:11:17.571 Identify (06h): Supported 00:11:17.571 Abort (08h): Supported 00:11:17.571 Set Features (09h): Supported 00:11:17.571 Get Features (0Ah): Supported 00:11:17.571 Asynchronous Event Request (0Ch): Supported 00:11:17.571 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:17.571 Directive Send (19h): Supported 00:11:17.571 Directive Receive (1Ah): Supported 00:11:17.571 Virtualization Management (1Ch): Supported 00:11:17.571 Doorbell Buffer Config (7Ch): Supported 00:11:17.571 Format NVM (80h): Supported LBA-Change 00:11:17.571 I/O Commands 00:11:17.571 ------------ 00:11:17.571 Flush (00h): Supported LBA-Change 00:11:17.571 Write (01h): Supported LBA-Change 00:11:17.571 Read (02h): Supported 00:11:17.571 Compare (05h): Supported 00:11:17.571 Write Zeroes (08h): Supported LBA-Change 00:11:17.571 Dataset Management (09h): Supported LBA-Change 00:11:17.571 Unknown (0Ch): Supported 00:11:17.571 Unknown (12h): Supported 00:11:17.571 Copy (19h): Supported LBA-Change 00:11:17.571 Unknown (1Dh): Supported LBA-Change 00:11:17.571 00:11:17.571 Error Log 00:11:17.571 ========= 00:11:17.571 00:11:17.571 Arbitration 00:11:17.571 =========== 00:11:17.571 Arbitration Burst: no limit 00:11:17.571 00:11:17.571 Power Management 00:11:17.571 ================ 00:11:17.571 Number of Power States: 1 00:11:17.571 Current Power State: Power State #0 00:11:17.571 Power State #0: 00:11:17.571 Max Power: 25.00 W 00:11:17.571 Non-Operational State: Operational 00:11:17.571 Entry Latency: 16 microseconds 00:11:17.571 Exit Latency: 4 microseconds 00:11:17.571 Relative Read Throughput: 0 
00:11:17.571 Relative Read Latency: 0 00:11:17.571 Relative Write Throughput: 0 00:11:17.571 Relative Write Latency: 0 00:11:17.571 Idle Power: Not Reported 00:11:17.571 Active Power: Not Reported 00:11:17.571 Non-Operational Permissive Mode: Not Supported 00:11:17.571 00:11:17.571 Health Information 00:11:17.571 ================== 00:11:17.571 Critical Warnings: 00:11:17.571 Available Spare Space: OK 00:11:17.571 Temperature: OK 00:11:17.571 Device Reliability: OK 00:11:17.571 Read Only: No 00:11:17.571 Volatile Memory Backup: OK 00:11:17.571 Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.571 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:17.571 Available Spare: 0% 00:11:17.571 Available Spare Threshold: 0% 00:11:17.571 Life Percentage Used: 0% 00:11:17.571 Data Units Read: 2252 00:11:17.571 Data Units Written: 1932 00:11:17.571 Host Read Commands: 104638 00:11:17.571 Host Write Commands: 100409 00:11:17.571 Controller Busy Time: 0 minutes 00:11:17.571 Power Cycles: 0 00:11:17.571 Power On Hours: 0 hours 00:11:17.571 Unsafe Shutdowns: 0 00:11:17.571 Unrecoverable Media Errors: 0 00:11:17.571 Lifetime Error Log Entries: 0 00:11:17.571 Warning Temperature Time: 0 minutes 00:11:17.571 Critical Temperature Time: 0 minutes 00:11:17.571 00:11:17.571 Number of Queues 00:11:17.571 ================ 00:11:17.571 Number of I/O Submission Queues: 64 00:11:17.571 Number of I/O Completion Queues: 64 00:11:17.571 00:11:17.571 ZNS Specific Controller Data 00:11:17.571 ============================ 00:11:17.571 Zone Append Size Limit: 0 00:11:17.571 00:11:17.571 00:11:17.571 Active Namespaces 00:11:17.571 ================= 00:11:17.571 Namespace ID:1 00:11:17.571 Error Recovery Timeout: Unlimited 00:11:17.571 Command Set Identifier: NVM (00h) 00:11:17.571 Deallocate: Supported 00:11:17.571 Deallocated/Unwritten Error: Supported 00:11:17.571 Deallocated Read Value: All 0x00 00:11:17.571 Deallocate in Write Zeroes: Not Supported 00:11:17.571 Deallocated Guard Field: 0xFFFF 00:11:17.571 Flush: Supported 00:11:17.571 Reservation: Not Supported 00:11:17.571 Namespace Sharing Capabilities: Private 00:11:17.571 Size (in LBAs): 1048576 (4GiB) 00:11:17.571 Capacity (in LBAs): 1048576 (4GiB) 00:11:17.571 Utilization (in LBAs): 1048576 (4GiB) 00:11:17.571 Thin Provisioning: Not Supported 00:11:17.571 Per-NS Atomic Units: No 00:11:17.572 [2024-04-24 19:25:43.155267] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 69917 terminated unexpected 00:11:17.572 [2024-04-24 19:25:43.156397] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 69917 terminated unexpected 00:11:17.572 [2024-04-24 19:25:43.157113] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69917 terminated unexpected 00:11:17.572 [2024-04-24 19:25:43.158258] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69917 terminated unexpected 00:11:17.572 Maximum Single Source Range Length: 128 00:11:17.572 Maximum Copy Length: 128 00:11:17.572 Maximum Source Range Count: 128 00:11:17.572 NGUID/EUI64 Never Reused: No 00:11:17.572 Namespace Write Protected: No 00:11:17.572 Number of LBA Formats: 8 00:11:17.572 Current LBA Format: LBA Format #04 00:11:17.572 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.572 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.572 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.572 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.572 LBA Format #04: Data 
Size: 4096 Metadata Size: 0 00:11:17.572 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.572 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:17.572 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.572 00:11:17.572 Namespace ID:2 00:11:17.572 Error Recovery Timeout: Unlimited 00:11:17.572 Command Set Identifier: NVM (00h) 00:11:17.572 Deallocate: Supported 00:11:17.572 Deallocated/Unwritten Error: Supported 00:11:17.572 Deallocated Read Value: All 0x00 00:11:17.572 Deallocate in Write Zeroes: Not Supported 00:11:17.572 Deallocated Guard Field: 0xFFFF 00:11:17.572 Flush: Supported 00:11:17.572 Reservation: Not Supported 00:11:17.572 Namespace Sharing Capabilities: Private 00:11:17.572 Size (in LBAs): 1048576 (4GiB) 00:11:17.572 Capacity (in LBAs): 1048576 (4GiB) 00:11:17.572 Utilization (in LBAs): 1048576 (4GiB) 00:11:17.572 Thin Provisioning: Not Supported 00:11:17.572 Per-NS Atomic Units: No 00:11:17.572 Maximum Single Source Range Length: 128 00:11:17.572 Maximum Copy Length: 128 00:11:17.572 Maximum Source Range Count: 128 00:11:17.572 NGUID/EUI64 Never Reused: No 00:11:17.572 Namespace Write Protected: No 00:11:17.572 Number of LBA Formats: 8 00:11:17.572 Current LBA Format: LBA Format #04 00:11:17.572 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.572 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.572 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.572 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.572 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:17.572 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.572 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:17.572 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.572 00:11:17.572 Namespace ID:3 00:11:17.572 Error Recovery Timeout: Unlimited 00:11:17.572 Command Set Identifier: NVM (00h) 00:11:17.572 Deallocate: Supported 00:11:17.572 Deallocated/Unwritten Error: Supported 00:11:17.572 Deallocated Read Value: All 0x00 00:11:17.572 Deallocate in Write Zeroes: Not Supported 00:11:17.572 Deallocated Guard Field: 0xFFFF 00:11:17.572 Flush: Supported 00:11:17.572 Reservation: Not Supported 00:11:17.572 Namespace Sharing Capabilities: Private 00:11:17.572 Size (in LBAs): 1048576 (4GiB) 00:11:17.572 Capacity (in LBAs): 1048576 (4GiB) 00:11:17.572 Utilization (in LBAs): 1048576 (4GiB) 00:11:17.572 Thin Provisioning: Not Supported 00:11:17.572 Per-NS Atomic Units: No 00:11:17.572 Maximum Single Source Range Length: 128 00:11:17.572 Maximum Copy Length: 128 00:11:17.572 Maximum Source Range Count: 128 00:11:17.572 NGUID/EUI64 Never Reused: No 00:11:17.572 Namespace Write Protected: No 00:11:17.572 Number of LBA Formats: 8 00:11:17.572 Current LBA Format: LBA Format #04 00:11:17.572 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.572 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.572 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.572 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.572 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:17.572 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.572 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:17.572 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.572 00:11:17.572 19:25:43 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:17.572 19:25:43 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:17.832 ===================================================== 
00:11:17.832 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:17.832 ===================================================== 00:11:17.832 Controller Capabilities/Features 00:11:17.832 ================================ 00:11:17.832 Vendor ID: 1b36 00:11:17.832 Subsystem Vendor ID: 1af4 00:11:17.832 Serial Number: 12340 00:11:17.832 Model Number: QEMU NVMe Ctrl 00:11:17.832 Firmware Version: 8.0.0 00:11:17.832 Recommended Arb Burst: 6 00:11:17.832 IEEE OUI Identifier: 00 54 52 00:11:17.832 Multi-path I/O 00:11:17.832 May have multiple subsystem ports: No 00:11:17.832 May have multiple controllers: No 00:11:17.832 Associated with SR-IOV VF: No 00:11:17.832 Max Data Transfer Size: 524288 00:11:17.832 Max Number of Namespaces: 256 00:11:17.832 Max Number of I/O Queues: 64 00:11:17.832 NVMe Specification Version (VS): 1.4 00:11:17.832 NVMe Specification Version (Identify): 1.4 00:11:17.832 Maximum Queue Entries: 2048 00:11:17.832 Contiguous Queues Required: Yes 00:11:17.832 Arbitration Mechanisms Supported 00:11:17.832 Weighted Round Robin: Not Supported 00:11:17.832 Vendor Specific: Not Supported 00:11:17.832 Reset Timeout: 7500 ms 00:11:17.832 Doorbell Stride: 4 bytes 00:11:17.832 NVM Subsystem Reset: Not Supported 00:11:17.832 Command Sets Supported 00:11:17.832 NVM Command Set: Supported 00:11:17.832 Boot Partition: Not Supported 00:11:17.832 Memory Page Size Minimum: 4096 bytes 00:11:17.832 Memory Page Size Maximum: 65536 bytes 00:11:17.832 Persistent Memory Region: Not Supported 00:11:17.832 Optional Asynchronous Events Supported 00:11:17.832 Namespace Attribute Notices: Supported 00:11:17.832 Firmware Activation Notices: Not Supported 00:11:17.832 ANA Change Notices: Not Supported 00:11:17.832 PLE Aggregate Log Change Notices: Not Supported 00:11:17.832 LBA Status Info Alert Notices: Not Supported 00:11:17.832 EGE Aggregate Log Change Notices: Not Supported 00:11:17.832 Normal NVM Subsystem Shutdown event: Not Supported 00:11:17.832 Zone Descriptor Change Notices: Not Supported 00:11:17.832 Discovery Log Change Notices: Not Supported 00:11:17.832 Controller Attributes 00:11:17.832 128-bit Host Identifier: Not Supported 00:11:17.832 Non-Operational Permissive Mode: Not Supported 00:11:17.832 NVM Sets: Not Supported 00:11:17.832 Read Recovery Levels: Not Supported 00:11:17.832 Endurance Groups: Not Supported 00:11:17.832 Predictable Latency Mode: Not Supported 00:11:17.832 Traffic Based Keep ALive: Not Supported 00:11:17.832 Namespace Granularity: Not Supported 00:11:17.832 SQ Associations: Not Supported 00:11:17.832 UUID List: Not Supported 00:11:17.832 Multi-Domain Subsystem: Not Supported 00:11:17.832 Fixed Capacity Management: Not Supported 00:11:17.832 Variable Capacity Management: Not Supported 00:11:17.832 Delete Endurance Group: Not Supported 00:11:17.832 Delete NVM Set: Not Supported 00:11:17.832 Extended LBA Formats Supported: Supported 00:11:17.832 Flexible Data Placement Supported: Not Supported 00:11:17.832 00:11:17.832 Controller Memory Buffer Support 00:11:17.832 ================================ 00:11:17.832 Supported: No 00:11:17.832 00:11:17.832 Persistent Memory Region Support 00:11:17.832 ================================ 00:11:17.832 Supported: No 00:11:17.832 00:11:17.832 Admin Command Set Attributes 00:11:17.832 ============================ 00:11:17.832 Security Send/Receive: Not Supported 00:11:17.832 Format NVM: Supported 00:11:17.832 Firmware Activate/Download: Not Supported 00:11:17.832 Namespace Management: Supported 00:11:17.832 Device Self-Test: Not Supported 
00:11:17.832 Directives: Supported 00:11:17.832 NVMe-MI: Not Supported 00:11:17.832 Virtualization Management: Not Supported 00:11:17.832 Doorbell Buffer Config: Supported 00:11:17.832 Get LBA Status Capability: Not Supported 00:11:17.832 Command & Feature Lockdown Capability: Not Supported 00:11:17.832 Abort Command Limit: 4 00:11:17.832 Async Event Request Limit: 4 00:11:17.832 Number of Firmware Slots: N/A 00:11:17.832 Firmware Slot 1 Read-Only: N/A 00:11:17.832 Firmware Activation Without Reset: N/A 00:11:17.832 Multiple Update Detection Support: N/A 00:11:17.832 Firmware Update Granularity: No Information Provided 00:11:17.832 Per-Namespace SMART Log: Yes 00:11:17.832 Asymmetric Namespace Access Log Page: Not Supported 00:11:17.832 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:17.832 Command Effects Log Page: Supported 00:11:17.832 Get Log Page Extended Data: Supported 00:11:17.832 Telemetry Log Pages: Not Supported 00:11:17.832 Persistent Event Log Pages: Not Supported 00:11:17.832 Supported Log Pages Log Page: May Support 00:11:17.832 Commands Supported & Effects Log Page: Not Supported 00:11:17.832 Feature Identifiers & Effects Log Page:May Support 00:11:17.832 NVMe-MI Commands & Effects Log Page: May Support 00:11:17.832 Data Area 4 for Telemetry Log: Not Supported 00:11:17.832 Error Log Page Entries Supported: 1 00:11:17.832 Keep Alive: Not Supported 00:11:17.832 00:11:17.832 NVM Command Set Attributes 00:11:17.832 ========================== 00:11:17.832 Submission Queue Entry Size 00:11:17.832 Max: 64 00:11:17.832 Min: 64 00:11:17.832 Completion Queue Entry Size 00:11:17.832 Max: 16 00:11:17.832 Min: 16 00:11:17.832 Number of Namespaces: 256 00:11:17.832 Compare Command: Supported 00:11:17.832 Write Uncorrectable Command: Not Supported 00:11:17.832 Dataset Management Command: Supported 00:11:17.832 Write Zeroes Command: Supported 00:11:17.832 Set Features Save Field: Supported 00:11:17.832 Reservations: Not Supported 00:11:17.832 Timestamp: Supported 00:11:17.832 Copy: Supported 00:11:17.832 Volatile Write Cache: Present 00:11:17.832 Atomic Write Unit (Normal): 1 00:11:17.832 Atomic Write Unit (PFail): 1 00:11:17.832 Atomic Compare & Write Unit: 1 00:11:17.832 Fused Compare & Write: Not Supported 00:11:17.832 Scatter-Gather List 00:11:17.832 SGL Command Set: Supported 00:11:17.832 SGL Keyed: Not Supported 00:11:17.832 SGL Bit Bucket Descriptor: Not Supported 00:11:17.832 SGL Metadata Pointer: Not Supported 00:11:17.832 Oversized SGL: Not Supported 00:11:17.832 SGL Metadata Address: Not Supported 00:11:17.832 SGL Offset: Not Supported 00:11:17.832 Transport SGL Data Block: Not Supported 00:11:17.832 Replay Protected Memory Block: Not Supported 00:11:17.832 00:11:17.832 Firmware Slot Information 00:11:17.832 ========================= 00:11:17.832 Active slot: 1 00:11:17.832 Slot 1 Firmware Revision: 1.0 00:11:17.832 00:11:17.832 00:11:17.832 Commands Supported and Effects 00:11:17.832 ============================== 00:11:17.832 Admin Commands 00:11:17.832 -------------- 00:11:17.832 Delete I/O Submission Queue (00h): Supported 00:11:17.832 Create I/O Submission Queue (01h): Supported 00:11:17.832 Get Log Page (02h): Supported 00:11:17.832 Delete I/O Completion Queue (04h): Supported 00:11:17.832 Create I/O Completion Queue (05h): Supported 00:11:17.832 Identify (06h): Supported 00:11:17.832 Abort (08h): Supported 00:11:17.832 Set Features (09h): Supported 00:11:17.832 Get Features (0Ah): Supported 00:11:17.832 Asynchronous Event Request (0Ch): Supported 00:11:17.832 Namespace 
Attachment (15h): Supported NS-Inventory-Change 00:11:17.832 Directive Send (19h): Supported 00:11:17.832 Directive Receive (1Ah): Supported 00:11:17.832 Virtualization Management (1Ch): Supported 00:11:17.832 Doorbell Buffer Config (7Ch): Supported 00:11:17.832 Format NVM (80h): Supported LBA-Change 00:11:17.832 I/O Commands 00:11:17.832 ------------ 00:11:17.832 Flush (00h): Supported LBA-Change 00:11:17.832 Write (01h): Supported LBA-Change 00:11:17.832 Read (02h): Supported 00:11:17.832 Compare (05h): Supported 00:11:17.832 Write Zeroes (08h): Supported LBA-Change 00:11:17.832 Dataset Management (09h): Supported LBA-Change 00:11:17.832 Unknown (0Ch): Supported 00:11:17.832 Unknown (12h): Supported 00:11:17.832 Copy (19h): Supported LBA-Change 00:11:17.832 Unknown (1Dh): Supported LBA-Change 00:11:17.832 00:11:17.832 Error Log 00:11:17.832 ========= 00:11:17.832 00:11:17.832 Arbitration 00:11:17.832 =========== 00:11:17.832 Arbitration Burst: no limit 00:11:17.832 00:11:17.832 Power Management 00:11:17.832 ================ 00:11:17.832 Number of Power States: 1 00:11:17.832 Current Power State: Power State #0 00:11:17.832 Power State #0: 00:11:17.832 Max Power: 25.00 W 00:11:17.832 Non-Operational State: Operational 00:11:17.833 Entry Latency: 16 microseconds 00:11:17.833 Exit Latency: 4 microseconds 00:11:17.833 Relative Read Throughput: 0 00:11:17.833 Relative Read Latency: 0 00:11:17.833 Relative Write Throughput: 0 00:11:17.833 Relative Write Latency: 0 00:11:17.833 Idle Power: Not Reported 00:11:17.833 Active Power: Not Reported 00:11:17.833 Non-Operational Permissive Mode: Not Supported 00:11:17.833 00:11:17.833 Health Information 00:11:17.833 ================== 00:11:17.833 Critical Warnings: 00:11:17.833 Available Spare Space: OK 00:11:17.833 Temperature: OK 00:11:17.833 Device Reliability: OK 00:11:17.833 Read Only: No 00:11:17.833 Volatile Memory Backup: OK 00:11:17.833 Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.833 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:17.833 Available Spare: 0% 00:11:17.833 Available Spare Threshold: 0% 00:11:17.833 Life Percentage Used: 0% 00:11:17.833 Data Units Read: 1057 00:11:17.833 Data Units Written: 888 00:11:17.833 Host Read Commands: 49758 00:11:17.833 Host Write Commands: 48227 00:11:17.833 Controller Busy Time: 0 minutes 00:11:17.833 Power Cycles: 0 00:11:17.833 Power On Hours: 0 hours 00:11:17.833 Unsafe Shutdowns: 0 00:11:17.833 Unrecoverable Media Errors: 0 00:11:17.833 Lifetime Error Log Entries: 0 00:11:17.833 Warning Temperature Time: 0 minutes 00:11:17.833 Critical Temperature Time: 0 minutes 00:11:17.833 00:11:17.833 Number of Queues 00:11:17.833 ================ 00:11:17.833 Number of I/O Submission Queues: 64 00:11:17.833 Number of I/O Completion Queues: 64 00:11:17.833 00:11:17.833 ZNS Specific Controller Data 00:11:17.833 ============================ 00:11:17.833 Zone Append Size Limit: 0 00:11:17.833 00:11:17.833 00:11:17.833 Active Namespaces 00:11:17.833 ================= 00:11:17.833 Namespace ID:1 00:11:17.833 Error Recovery Timeout: Unlimited 00:11:17.833 Command Set Identifier: NVM (00h) 00:11:17.833 Deallocate: Supported 00:11:17.833 Deallocated/Unwritten Error: Supported 00:11:17.833 Deallocated Read Value: All 0x00 00:11:17.833 Deallocate in Write Zeroes: Not Supported 00:11:17.833 Deallocated Guard Field: 0xFFFF 00:11:17.833 Flush: Supported 00:11:17.833 Reservation: Not Supported 00:11:17.833 Metadata Transferred as: Separate Metadata Buffer 00:11:17.833 Namespace Sharing Capabilities: Private 
00:11:17.833 Size (in LBAs): 1548666 (5GiB) 00:11:17.833 Capacity (in LBAs): 1548666 (5GiB) 00:11:17.833 Utilization (in LBAs): 1548666 (5GiB) 00:11:17.833 Thin Provisioning: Not Supported 00:11:17.833 Per-NS Atomic Units: No 00:11:17.833 Maximum Single Source Range Length: 128 00:11:17.833 Maximum Copy Length: 128 00:11:17.833 Maximum Source Range Count: 128 00:11:17.833 NGUID/EUI64 Never Reused: No 00:11:17.833 Namespace Write Protected: No 00:11:17.833 Number of LBA Formats: 8 00:11:17.833 Current LBA Format: LBA Format #07 00:11:17.833 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.833 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.833 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.833 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.833 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:17.833 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.833 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:17.833 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.833 00:11:18.091 19:25:43 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:18.091 19:25:43 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:18.351 ===================================================== 00:11:18.351 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:18.351 ===================================================== 00:11:18.351 Controller Capabilities/Features 00:11:18.351 ================================ 00:11:18.351 Vendor ID: 1b36 00:11:18.351 Subsystem Vendor ID: 1af4 00:11:18.351 Serial Number: 12341 00:11:18.351 Model Number: QEMU NVMe Ctrl 00:11:18.351 Firmware Version: 8.0.0 00:11:18.351 Recommended Arb Burst: 6 00:11:18.351 IEEE OUI Identifier: 00 54 52 00:11:18.351 Multi-path I/O 00:11:18.351 May have multiple subsystem ports: No 00:11:18.351 May have multiple controllers: No 00:11:18.351 Associated with SR-IOV VF: No 00:11:18.351 Max Data Transfer Size: 524288 00:11:18.351 Max Number of Namespaces: 256 00:11:18.351 Max Number of I/O Queues: 64 00:11:18.351 NVMe Specification Version (VS): 1.4 00:11:18.351 NVMe Specification Version (Identify): 1.4 00:11:18.351 Maximum Queue Entries: 2048 00:11:18.351 Contiguous Queues Required: Yes 00:11:18.351 Arbitration Mechanisms Supported 00:11:18.351 Weighted Round Robin: Not Supported 00:11:18.351 Vendor Specific: Not Supported 00:11:18.351 Reset Timeout: 7500 ms 00:11:18.351 Doorbell Stride: 4 bytes 00:11:18.351 NVM Subsystem Reset: Not Supported 00:11:18.351 Command Sets Supported 00:11:18.351 NVM Command Set: Supported 00:11:18.351 Boot Partition: Not Supported 00:11:18.351 Memory Page Size Minimum: 4096 bytes 00:11:18.351 Memory Page Size Maximum: 65536 bytes 00:11:18.351 Persistent Memory Region: Not Supported 00:11:18.351 Optional Asynchronous Events Supported 00:11:18.351 Namespace Attribute Notices: Supported 00:11:18.351 Firmware Activation Notices: Not Supported 00:11:18.351 ANA Change Notices: Not Supported 00:11:18.351 PLE Aggregate Log Change Notices: Not Supported 00:11:18.351 LBA Status Info Alert Notices: Not Supported 00:11:18.351 EGE Aggregate Log Change Notices: Not Supported 00:11:18.351 Normal NVM Subsystem Shutdown event: Not Supported 00:11:18.351 Zone Descriptor Change Notices: Not Supported 00:11:18.351 Discovery Log Change Notices: Not Supported 00:11:18.351 Controller Attributes 00:11:18.351 128-bit Host Identifier: Not Supported 00:11:18.351 Non-Operational Permissive Mode: Not Supported 00:11:18.351 
NVM Sets: Not Supported 00:11:18.351 Read Recovery Levels: Not Supported 00:11:18.351 Endurance Groups: Not Supported 00:11:18.351 Predictable Latency Mode: Not Supported 00:11:18.351 Traffic Based Keep Alive: Not Supported 00:11:18.351 Namespace Granularity: Not Supported 00:11:18.351 SQ Associations: Not Supported 00:11:18.351 UUID List: Not Supported 00:11:18.351 Multi-Domain Subsystem: Not Supported 00:11:18.351 Fixed Capacity Management: Not Supported 00:11:18.351 Variable Capacity Management: Not Supported 00:11:18.351 Delete Endurance Group: Not Supported 00:11:18.351 Delete NVM Set: Not Supported 00:11:18.351 Extended LBA Formats Supported: Supported 00:11:18.351 Flexible Data Placement Supported: Not Supported 00:11:18.351 00:11:18.351 Controller Memory Buffer Support 00:11:18.351 ================================ 00:11:18.351 Supported: No 00:11:18.351 00:11:18.351 Persistent Memory Region Support 00:11:18.351 ================================ 00:11:18.351 Supported: No 00:11:18.351 00:11:18.351 Admin Command Set Attributes 00:11:18.351 ============================ 00:11:18.351 Security Send/Receive: Not Supported 00:11:18.351 Format NVM: Supported 00:11:18.351 Firmware Activate/Download: Not Supported 00:11:18.351 Namespace Management: Supported 00:11:18.351 Device Self-Test: Not Supported 00:11:18.351 Directives: Supported 00:11:18.351 NVMe-MI: Not Supported 00:11:18.351 Virtualization Management: Not Supported 00:11:18.351 Doorbell Buffer Config: Supported 00:11:18.351 Get LBA Status Capability: Not Supported 00:11:18.351 Command & Feature Lockdown Capability: Not Supported 00:11:18.351 Abort Command Limit: 4 00:11:18.351 Async Event Request Limit: 4 00:11:18.351 Number of Firmware Slots: N/A 00:11:18.351 Firmware Slot 1 Read-Only: N/A 00:11:18.351 Firmware Activation Without Reset: N/A 00:11:18.351 Multiple Update Detection Support: N/A 00:11:18.351 Firmware Update Granularity: No Information Provided 00:11:18.351 Per-Namespace SMART Log: Yes 00:11:18.351 Asymmetric Namespace Access Log Page: Not Supported 00:11:18.351 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:18.351 Command Effects Log Page: Supported 00:11:18.351 Get Log Page Extended Data: Supported 00:11:18.351 Telemetry Log Pages: Not Supported 00:11:18.351 Persistent Event Log Pages: Not Supported 00:11:18.351 Supported Log Pages Log Page: May Support 00:11:18.351 Commands Supported & Effects Log Page: Not Supported 00:11:18.351 Feature Identifiers & Effects Log Page: May Support 00:11:18.351 NVMe-MI Commands & Effects Log Page: May Support 00:11:18.351 Data Area 4 for Telemetry Log: Not Supported 00:11:18.351 Error Log Page Entries Supported: 1 00:11:18.351 Keep Alive: Not Supported 00:11:18.351 00:11:18.351 NVM Command Set Attributes 00:11:18.351 ========================== 00:11:18.351 Submission Queue Entry Size 00:11:18.351 Max: 64 00:11:18.351 Min: 64 00:11:18.351 Completion Queue Entry Size 00:11:18.351 Max: 16 00:11:18.351 Min: 16 00:11:18.351 Number of Namespaces: 256 00:11:18.351 Compare Command: Supported 00:11:18.351 Write Uncorrectable Command: Not Supported 00:11:18.351 Dataset Management Command: Supported 00:11:18.351 Write Zeroes Command: Supported 00:11:18.351 Set Features Save Field: Supported 00:11:18.351 Reservations: Not Supported 00:11:18.351 Timestamp: Supported 00:11:18.351 Copy: Supported 00:11:18.351 Volatile Write Cache: Present 00:11:18.351 Atomic Write Unit (Normal): 1 00:11:18.351 Atomic Write Unit (PFail): 1 00:11:18.351 Atomic Compare & Write Unit: 1 00:11:18.351 Fused Compare & Write:
Not Supported 00:11:18.351 Scatter-Gather List 00:11:18.351 SGL Command Set: Supported 00:11:18.351 SGL Keyed: Not Supported 00:11:18.351 SGL Bit Bucket Descriptor: Not Supported 00:11:18.351 SGL Metadata Pointer: Not Supported 00:11:18.351 Oversized SGL: Not Supported 00:11:18.351 SGL Metadata Address: Not Supported 00:11:18.351 SGL Offset: Not Supported 00:11:18.351 Transport SGL Data Block: Not Supported 00:11:18.351 Replay Protected Memory Block: Not Supported 00:11:18.351 00:11:18.351 Firmware Slot Information 00:11:18.351 ========================= 00:11:18.351 Active slot: 1 00:11:18.351 Slot 1 Firmware Revision: 1.0 00:11:18.351 00:11:18.351 00:11:18.351 Commands Supported and Effects 00:11:18.351 ============================== 00:11:18.351 Admin Commands 00:11:18.351 -------------- 00:11:18.351 Delete I/O Submission Queue (00h): Supported 00:11:18.351 Create I/O Submission Queue (01h): Supported 00:11:18.351 Get Log Page (02h): Supported 00:11:18.351 Delete I/O Completion Queue (04h): Supported 00:11:18.351 Create I/O Completion Queue (05h): Supported 00:11:18.351 Identify (06h): Supported 00:11:18.351 Abort (08h): Supported 00:11:18.351 Set Features (09h): Supported 00:11:18.351 Get Features (0Ah): Supported 00:11:18.351 Asynchronous Event Request (0Ch): Supported 00:11:18.351 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:18.351 Directive Send (19h): Supported 00:11:18.351 Directive Receive (1Ah): Supported 00:11:18.351 Virtualization Management (1Ch): Supported 00:11:18.351 Doorbell Buffer Config (7Ch): Supported 00:11:18.351 Format NVM (80h): Supported LBA-Change 00:11:18.351 I/O Commands 00:11:18.351 ------------ 00:11:18.351 Flush (00h): Supported LBA-Change 00:11:18.351 Write (01h): Supported LBA-Change 00:11:18.351 Read (02h): Supported 00:11:18.351 Compare (05h): Supported 00:11:18.351 Write Zeroes (08h): Supported LBA-Change 00:11:18.351 Dataset Management (09h): Supported LBA-Change 00:11:18.351 Unknown (0Ch): Supported 00:11:18.351 Unknown (12h): Supported 00:11:18.351 Copy (19h): Supported LBA-Change 00:11:18.351 Unknown (1Dh): Supported LBA-Change 00:11:18.351 00:11:18.351 Error Log 00:11:18.352 ========= 00:11:18.352 00:11:18.352 Arbitration 00:11:18.352 =========== 00:11:18.352 Arbitration Burst: no limit 00:11:18.352 00:11:18.352 Power Management 00:11:18.352 ================ 00:11:18.352 Number of Power States: 1 00:11:18.352 Current Power State: Power State #0 00:11:18.352 Power State #0: 00:11:18.352 Max Power: 25.00 W 00:11:18.352 Non-Operational State: Operational 00:11:18.352 Entry Latency: 16 microseconds 00:11:18.352 Exit Latency: 4 microseconds 00:11:18.352 Relative Read Throughput: 0 00:11:18.352 Relative Read Latency: 0 00:11:18.352 Relative Write Throughput: 0 00:11:18.352 Relative Write Latency: 0 00:11:18.352 Idle Power: Not Reported 00:11:18.352 Active Power: Not Reported 00:11:18.352 Non-Operational Permissive Mode: Not Supported 00:11:18.352 00:11:18.352 Health Information 00:11:18.352 ================== 00:11:18.352 Critical Warnings: 00:11:18.352 Available Spare Space: OK 00:11:18.352 Temperature: OK 00:11:18.352 Device Reliability: OK 00:11:18.352 Read Only: No 00:11:18.352 Volatile Memory Backup: OK 00:11:18.352 Current Temperature: 323 Kelvin (50 Celsius) 00:11:18.352 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:18.352 Available Spare: 0% 00:11:18.352 Available Spare Threshold: 0% 00:11:18.352 Life Percentage Used: 0% 00:11:18.352 Data Units Read: 760 00:11:18.352 Data Units Written: 608 00:11:18.352 Host Read 
Commands: 35453 00:11:18.352 Host Write Commands: 33183 00:11:18.352 Controller Busy Time: 0 minutes 00:11:18.352 Power Cycles: 0 00:11:18.352 Power On Hours: 0 hours 00:11:18.352 Unsafe Shutdowns: 0 00:11:18.352 Unrecoverable Media Errors: 0 00:11:18.352 Lifetime Error Log Entries: 0 00:11:18.352 Warning Temperature Time: 0 minutes 00:11:18.352 Critical Temperature Time: 0 minutes 00:11:18.352 00:11:18.352 Number of Queues 00:11:18.352 ================ 00:11:18.352 Number of I/O Submission Queues: 64 00:11:18.352 Number of I/O Completion Queues: 64 00:11:18.352 00:11:18.352 ZNS Specific Controller Data 00:11:18.352 ============================ 00:11:18.352 Zone Append Size Limit: 0 00:11:18.352 00:11:18.352 00:11:18.352 Active Namespaces 00:11:18.352 ================= 00:11:18.352 Namespace ID:1 00:11:18.352 Error Recovery Timeout: Unlimited 00:11:18.352 Command Set Identifier: NVM (00h) 00:11:18.352 Deallocate: Supported 00:11:18.352 Deallocated/Unwritten Error: Supported 00:11:18.352 Deallocated Read Value: All 0x00 00:11:18.352 Deallocate in Write Zeroes: Not Supported 00:11:18.352 Deallocated Guard Field: 0xFFFF 00:11:18.352 Flush: Supported 00:11:18.352 Reservation: Not Supported 00:11:18.352 Namespace Sharing Capabilities: Private 00:11:18.352 Size (in LBAs): 1310720 (5GiB) 00:11:18.352 Capacity (in LBAs): 1310720 (5GiB) 00:11:18.352 Utilization (in LBAs): 1310720 (5GiB) 00:11:18.352 Thin Provisioning: Not Supported 00:11:18.352 Per-NS Atomic Units: No 00:11:18.352 Maximum Single Source Range Length: 128 00:11:18.352 Maximum Copy Length: 128 00:11:18.352 Maximum Source Range Count: 128 00:11:18.352 NGUID/EUI64 Never Reused: No 00:11:18.352 Namespace Write Protected: No 00:11:18.352 Number of LBA Formats: 8 00:11:18.352 Current LBA Format: LBA Format #04 00:11:18.352 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:18.352 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:18.352 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:18.352 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:18.352 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:18.352 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:18.352 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:18.352 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:18.352 00:11:18.352 19:25:43 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:18.352 19:25:43 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:18.610 ===================================================== 00:11:18.610 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:18.610 ===================================================== 00:11:18.610 Controller Capabilities/Features 00:11:18.610 ================================ 00:11:18.610 Vendor ID: 1b36 00:11:18.610 Subsystem Vendor ID: 1af4 00:11:18.610 Serial Number: 12342 00:11:18.610 Model Number: QEMU NVMe Ctrl 00:11:18.610 Firmware Version: 8.0.0 00:11:18.610 Recommended Arb Burst: 6 00:11:18.610 IEEE OUI Identifier: 00 54 52 00:11:18.610 Multi-path I/O 00:11:18.611 May have multiple subsystem ports: No 00:11:18.611 May have multiple controllers: No 00:11:18.611 Associated with SR-IOV VF: No 00:11:18.611 Max Data Transfer Size: 524288 00:11:18.611 Max Number of Namespaces: 256 00:11:18.611 Max Number of I/O Queues: 64 00:11:18.611 NVMe Specification Version (VS): 1.4 00:11:18.611 NVMe Specification Version (Identify): 1.4 00:11:18.611 Maximum Queue Entries: 2048 00:11:18.611 Contiguous Queues Required: Yes 
00:11:18.611 Arbitration Mechanisms Supported 00:11:18.611 Weighted Round Robin: Not Supported 00:11:18.611 Vendor Specific: Not Supported 00:11:18.611 Reset Timeout: 7500 ms 00:11:18.611 Doorbell Stride: 4 bytes 00:11:18.611 NVM Subsystem Reset: Not Supported 00:11:18.611 Command Sets Supported 00:11:18.611 NVM Command Set: Supported 00:11:18.611 Boot Partition: Not Supported 00:11:18.611 Memory Page Size Minimum: 4096 bytes 00:11:18.611 Memory Page Size Maximum: 65536 bytes 00:11:18.611 Persistent Memory Region: Not Supported 00:11:18.611 Optional Asynchronous Events Supported 00:11:18.611 Namespace Attribute Notices: Supported 00:11:18.611 Firmware Activation Notices: Not Supported 00:11:18.611 ANA Change Notices: Not Supported 00:11:18.611 PLE Aggregate Log Change Notices: Not Supported 00:11:18.611 LBA Status Info Alert Notices: Not Supported 00:11:18.611 EGE Aggregate Log Change Notices: Not Supported 00:11:18.611 Normal NVM Subsystem Shutdown event: Not Supported 00:11:18.611 Zone Descriptor Change Notices: Not Supported 00:11:18.611 Discovery Log Change Notices: Not Supported 00:11:18.611 Controller Attributes 00:11:18.611 128-bit Host Identifier: Not Supported 00:11:18.611 Non-Operational Permissive Mode: Not Supported 00:11:18.611 NVM Sets: Not Supported 00:11:18.611 Read Recovery Levels: Not Supported 00:11:18.611 Endurance Groups: Not Supported 00:11:18.611 Predictable Latency Mode: Not Supported 00:11:18.611 Traffic Based Keep Alive: Not Supported 00:11:18.611 Namespace Granularity: Not Supported 00:11:18.611 SQ Associations: Not Supported 00:11:18.611 UUID List: Not Supported 00:11:18.611 Multi-Domain Subsystem: Not Supported 00:11:18.611 Fixed Capacity Management: Not Supported 00:11:18.611 Variable Capacity Management: Not Supported 00:11:18.611 Delete Endurance Group: Not Supported 00:11:18.611 Delete NVM Set: Not Supported 00:11:18.611 Extended LBA Formats Supported: Supported 00:11:18.611 Flexible Data Placement Supported: Not Supported 00:11:18.611 00:11:18.611 Controller Memory Buffer Support 00:11:18.611 ================================ 00:11:18.611 Supported: No 00:11:18.611 00:11:18.611 Persistent Memory Region Support 00:11:18.611 ================================ 00:11:18.611 Supported: No 00:11:18.611 00:11:18.611 Admin Command Set Attributes 00:11:18.611 ============================ 00:11:18.611 Security Send/Receive: Not Supported 00:11:18.611 Format NVM: Supported 00:11:18.611 Firmware Activate/Download: Not Supported 00:11:18.611 Namespace Management: Supported 00:11:18.611 Device Self-Test: Not Supported 00:11:18.611 Directives: Supported 00:11:18.611 NVMe-MI: Not Supported 00:11:18.611 Virtualization Management: Not Supported 00:11:18.611 Doorbell Buffer Config: Supported 00:11:18.611 Get LBA Status Capability: Not Supported 00:11:18.611 Command & Feature Lockdown Capability: Not Supported 00:11:18.611 Abort Command Limit: 4 00:11:18.611 Async Event Request Limit: 4 00:11:18.611 Number of Firmware Slots: N/A 00:11:18.611 Firmware Slot 1 Read-Only: N/A 00:11:18.611 Firmware Activation Without Reset: N/A 00:11:18.611 Multiple Update Detection Support: N/A 00:11:18.611 Firmware Update Granularity: No Information Provided 00:11:18.611 Per-Namespace SMART Log: Yes 00:11:18.611 Asymmetric Namespace Access Log Page: Not Supported 00:11:18.611 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:18.611 Command Effects Log Page: Supported 00:11:18.611 Get Log Page Extended Data: Supported 00:11:18.611 Telemetry Log Pages: Not Supported 00:11:18.611 Persistent Event Log
Pages: Not Supported 00:11:18.611 Supported Log Pages Log Page: May Support 00:11:18.611 Commands Supported & Effects Log Page: Not Supported 00:11:18.611 Feature Identifiers & Effects Log Page: May Support 00:11:18.611 NVMe-MI Commands & Effects Log Page: May Support 00:11:18.611 Data Area 4 for Telemetry Log: Not Supported 00:11:18.611 Error Log Page Entries Supported: 1 00:11:18.611 Keep Alive: Not Supported 00:11:18.611 00:11:18.611 NVM Command Set Attributes 00:11:18.611 ========================== 00:11:18.611 Submission Queue Entry Size 00:11:18.611 Max: 64 00:11:18.611 Min: 64 00:11:18.611 Completion Queue Entry Size 00:11:18.611 Max: 16 00:11:18.611 Min: 16 00:11:18.611 Number of Namespaces: 256 00:11:18.611 Compare Command: Supported 00:11:18.611 Write Uncorrectable Command: Not Supported 00:11:18.611 Dataset Management Command: Supported 00:11:18.611 Write Zeroes Command: Supported 00:11:18.611 Set Features Save Field: Supported 00:11:18.611 Reservations: Not Supported 00:11:18.611 Timestamp: Supported 00:11:18.611 Copy: Supported 00:11:18.611 Volatile Write Cache: Present 00:11:18.611 Atomic Write Unit (Normal): 1 00:11:18.611 Atomic Write Unit (PFail): 1 00:11:18.611 Atomic Compare & Write Unit: 1 00:11:18.611 Fused Compare & Write: Not Supported 00:11:18.611 Scatter-Gather List 00:11:18.611 SGL Command Set: Supported 00:11:18.611 SGL Keyed: Not Supported 00:11:18.611 SGL Bit Bucket Descriptor: Not Supported 00:11:18.611 SGL Metadata Pointer: Not Supported 00:11:18.611 Oversized SGL: Not Supported 00:11:18.611 SGL Metadata Address: Not Supported 00:11:18.611 SGL Offset: Not Supported 00:11:18.611 Transport SGL Data Block: Not Supported 00:11:18.611 Replay Protected Memory Block: Not Supported 00:11:18.611 00:11:18.611 Firmware Slot Information 00:11:18.612 ========================= 00:11:18.612 Active slot: 1 00:11:18.612 Slot 1 Firmware Revision: 1.0 00:11:18.612 00:11:18.612 00:11:18.612 Commands Supported and Effects 00:11:18.612 ============================== 00:11:18.612 Admin Commands 00:11:18.612 -------------- 00:11:18.612 Delete I/O Submission Queue (00h): Supported 00:11:18.612 Create I/O Submission Queue (01h): Supported 00:11:18.612 Get Log Page (02h): Supported 00:11:18.612 Delete I/O Completion Queue (04h): Supported 00:11:18.612 Create I/O Completion Queue (05h): Supported 00:11:18.612 Identify (06h): Supported 00:11:18.612 Abort (08h): Supported 00:11:18.612 Set Features (09h): Supported 00:11:18.612 Get Features (0Ah): Supported 00:11:18.612 Asynchronous Event Request (0Ch): Supported 00:11:18.612 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:18.612 Directive Send (19h): Supported 00:11:18.612 Directive Receive (1Ah): Supported 00:11:18.612 Virtualization Management (1Ch): Supported 00:11:18.612 Doorbell Buffer Config (7Ch): Supported 00:11:18.612 Format NVM (80h): Supported LBA-Change 00:11:18.612 I/O Commands 00:11:18.612 ------------ 00:11:18.612 Flush (00h): Supported LBA-Change 00:11:18.612 Write (01h): Supported LBA-Change 00:11:18.612 Read (02h): Supported 00:11:18.612 Compare (05h): Supported 00:11:18.612 Write Zeroes (08h): Supported LBA-Change 00:11:18.612 Dataset Management (09h): Supported LBA-Change 00:11:18.612 Unknown (0Ch): Supported 00:11:18.612 Unknown (12h): Supported 00:11:18.612 Copy (19h): Supported LBA-Change 00:11:18.612 Unknown (1Dh): Supported LBA-Change 00:11:18.612 00:11:18.612 Error Log 00:11:18.612 ========= 00:11:18.612 00:11:18.612 Arbitration 00:11:18.612 =========== 00:11:18.612 Arbitration Burst: no limit
00:11:18.612 00:11:18.612 Power Management 00:11:18.612 ================ 00:11:18.612 Number of Power States: 1 00:11:18.612 Current Power State: Power State #0 00:11:18.612 Power State #0: 00:11:18.612 Max Power: 25.00 W 00:11:18.612 Non-Operational State: Operational 00:11:18.612 Entry Latency: 16 microseconds 00:11:18.612 Exit Latency: 4 microseconds 00:11:18.612 Relative Read Throughput: 0 00:11:18.612 Relative Read Latency: 0 00:11:18.612 Relative Write Throughput: 0 00:11:18.612 Relative Write Latency: 0 00:11:18.612 Idle Power: Not Reported 00:11:18.612 Active Power: Not Reported 00:11:18.612 Non-Operational Permissive Mode: Not Supported 00:11:18.612 00:11:18.612 Health Information 00:11:18.612 ================== 00:11:18.612 Critical Warnings: 00:11:18.612 Available Spare Space: OK 00:11:18.612 Temperature: OK 00:11:18.612 Device Reliability: OK 00:11:18.612 Read Only: No 00:11:18.612 Volatile Memory Backup: OK 00:11:18.612 Current Temperature: 323 Kelvin (50 Celsius) 00:11:18.612 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:18.612 Available Spare: 0% 00:11:18.612 Available Spare Threshold: 0% 00:11:18.612 Life Percentage Used: 0% 00:11:18.612 Data Units Read: 2252 00:11:18.612 Data Units Written: 1932 00:11:18.612 Host Read Commands: 104638 00:11:18.612 Host Write Commands: 100409 00:11:18.612 Controller Busy Time: 0 minutes 00:11:18.612 Power Cycles: 0 00:11:18.612 Power On Hours: 0 hours 00:11:18.612 Unsafe Shutdowns: 0 00:11:18.612 Unrecoverable Media Errors: 0 00:11:18.612 Lifetime Error Log Entries: 0 00:11:18.612 Warning Temperature Time: 0 minutes 00:11:18.612 Critical Temperature Time: 0 minutes 00:11:18.612 00:11:18.612 Number of Queues 00:11:18.612 ================ 00:11:18.612 Number of I/O Submission Queues: 64 00:11:18.612 Number of I/O Completion Queues: 64 00:11:18.612 00:11:18.612 ZNS Specific Controller Data 00:11:18.612 ============================ 00:11:18.612 Zone Append Size Limit: 0 00:11:18.612 00:11:18.612 00:11:18.612 Active Namespaces 00:11:18.612 ================= 00:11:18.612 Namespace ID:1 00:11:18.612 Error Recovery Timeout: Unlimited 00:11:18.612 Command Set Identifier: NVM (00h) 00:11:18.612 Deallocate: Supported 00:11:18.612 Deallocated/Unwritten Error: Supported 00:11:18.612 Deallocated Read Value: All 0x00 00:11:18.612 Deallocate in Write Zeroes: Not Supported 00:11:18.612 Deallocated Guard Field: 0xFFFF 00:11:18.612 Flush: Supported 00:11:18.612 Reservation: Not Supported 00:11:18.612 Namespace Sharing Capabilities: Private 00:11:18.612 Size (in LBAs): 1048576 (4GiB) 00:11:18.612 Capacity (in LBAs): 1048576 (4GiB) 00:11:18.612 Utilization (in LBAs): 1048576 (4GiB) 00:11:18.612 Thin Provisioning: Not Supported 00:11:18.612 Per-NS Atomic Units: No 00:11:18.612 Maximum Single Source Range Length: 128 00:11:18.612 Maximum Copy Length: 128 00:11:18.612 Maximum Source Range Count: 128 00:11:18.612 NGUID/EUI64 Never Reused: No 00:11:18.612 Namespace Write Protected: No 00:11:18.612 Number of LBA Formats: 8 00:11:18.612 Current LBA Format: LBA Format #04 00:11:18.612 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:18.612 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:18.612 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:18.612 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:18.612 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:18.612 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:18.612 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:18.612 LBA Format #07: Data Size: 4096 Metadata Size: 64 
00:11:18.612 00:11:18.612 Namespace ID:2 00:11:18.612 Error Recovery Timeout: Unlimited 00:11:18.612 Command Set Identifier: NVM (00h) 00:11:18.612 Deallocate: Supported 00:11:18.613 Deallocated/Unwritten Error: Supported 00:11:18.613 Deallocated Read Value: All 0x00 00:11:18.613 Deallocate in Write Zeroes: Not Supported 00:11:18.613 Deallocated Guard Field: 0xFFFF 00:11:18.613 Flush: Supported 00:11:18.613 Reservation: Not Supported 00:11:18.613 Namespace Sharing Capabilities: Private 00:11:18.613 Size (in LBAs): 1048576 (4GiB) 00:11:18.613 Capacity (in LBAs): 1048576 (4GiB) 00:11:18.613 Utilization (in LBAs): 1048576 (4GiB) 00:11:18.613 Thin Provisioning: Not Supported 00:11:18.613 Per-NS Atomic Units: No 00:11:18.613 Maximum Single Source Range Length: 128 00:11:18.613 Maximum Copy Length: 128 00:11:18.613 Maximum Source Range Count: 128 00:11:18.613 NGUID/EUI64 Never Reused: No 00:11:18.613 Namespace Write Protected: No 00:11:18.613 Number of LBA Formats: 8 00:11:18.613 Current LBA Format: LBA Format #04 00:11:18.613 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:18.613 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:18.613 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:18.613 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:18.613 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:18.613 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:18.613 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:18.613 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:18.613 00:11:18.613 Namespace ID:3 00:11:18.613 Error Recovery Timeout: Unlimited 00:11:18.613 Command Set Identifier: NVM (00h) 00:11:18.613 Deallocate: Supported 00:11:18.613 Deallocated/Unwritten Error: Supported 00:11:18.613 Deallocated Read Value: All 0x00 00:11:18.613 Deallocate in Write Zeroes: Not Supported 00:11:18.613 Deallocated Guard Field: 0xFFFF 00:11:18.613 Flush: Supported 00:11:18.613 Reservation: Not Supported 00:11:18.613 Namespace Sharing Capabilities: Private 00:11:18.613 Size (in LBAs): 1048576 (4GiB) 00:11:18.613 Capacity (in LBAs): 1048576 (4GiB) 00:11:18.613 Utilization (in LBAs): 1048576 (4GiB) 00:11:18.613 Thin Provisioning: Not Supported 00:11:18.613 Per-NS Atomic Units: No 00:11:18.613 Maximum Single Source Range Length: 128 00:11:18.613 Maximum Copy Length: 128 00:11:18.613 Maximum Source Range Count: 128 00:11:18.613 NGUID/EUI64 Never Reused: No 00:11:18.613 Namespace Write Protected: No 00:11:18.613 Number of LBA Formats: 8 00:11:18.613 Current LBA Format: LBA Format #04 00:11:18.613 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:18.613 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:18.613 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:18.613 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:18.613 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:18.613 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:18.613 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:18.613 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:18.613 00:11:18.613 19:25:44 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:18.613 19:25:44 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:18.873 ===================================================== 00:11:18.873 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:18.873 ===================================================== 00:11:18.873 Controller Capabilities/Features 00:11:18.873 ================================ 
00:11:18.873 Vendor ID: 1b36 00:11:18.873 Subsystem Vendor ID: 1af4 00:11:18.873 Serial Number: 12343 00:11:18.873 Model Number: QEMU NVMe Ctrl 00:11:18.873 Firmware Version: 8.0.0 00:11:18.873 Recommended Arb Burst: 6 00:11:18.873 IEEE OUI Identifier: 00 54 52 00:11:18.873 Multi-path I/O 00:11:18.873 May have multiple subsystem ports: No 00:11:18.873 May have multiple controllers: Yes 00:11:18.873 Associated with SR-IOV VF: No 00:11:18.873 Max Data Transfer Size: 524288 00:11:18.873 Max Number of Namespaces: 256 00:11:18.873 Max Number of I/O Queues: 64 00:11:18.873 NVMe Specification Version (VS): 1.4 00:11:18.873 NVMe Specification Version (Identify): 1.4 00:11:18.873 Maximum Queue Entries: 2048 00:11:18.873 Contiguous Queues Required: Yes 00:11:18.873 Arbitration Mechanisms Supported 00:11:18.873 Weighted Round Robin: Not Supported 00:11:18.873 Vendor Specific: Not Supported 00:11:18.873 Reset Timeout: 7500 ms 00:11:18.873 Doorbell Stride: 4 bytes 00:11:18.873 NVM Subsystem Reset: Not Supported 00:11:18.873 Command Sets Supported 00:11:18.873 NVM Command Set: Supported 00:11:18.873 Boot Partition: Not Supported 00:11:18.873 Memory Page Size Minimum: 4096 bytes 00:11:18.873 Memory Page Size Maximum: 65536 bytes 00:11:18.873 Persistent Memory Region: Not Supported 00:11:18.873 Optional Asynchronous Events Supported 00:11:18.873 Namespace Attribute Notices: Supported 00:11:18.873 Firmware Activation Notices: Not Supported 00:11:18.873 ANA Change Notices: Not Supported 00:11:18.873 PLE Aggregate Log Change Notices: Not Supported 00:11:18.873 LBA Status Info Alert Notices: Not Supported 00:11:18.873 EGE Aggregate Log Change Notices: Not Supported 00:11:18.873 Normal NVM Subsystem Shutdown event: Not Supported 00:11:18.873 Zone Descriptor Change Notices: Not Supported 00:11:18.873 Discovery Log Change Notices: Not Supported 00:11:18.873 Controller Attributes 00:11:18.873 128-bit Host Identifier: Not Supported 00:11:18.873 Non-Operational Permissive Mode: Not Supported 00:11:18.873 NVM Sets: Not Supported 00:11:18.873 Read Recovery Levels: Not Supported 00:11:18.873 Endurance Groups: Supported 00:11:18.873 Predictable Latency Mode: Not Supported 00:11:18.873 Traffic Based Keep Alive: Not Supported 00:11:18.873 Namespace Granularity: Not Supported 00:11:18.873 SQ Associations: Not Supported 00:11:18.873 UUID List: Not Supported 00:11:18.873 Multi-Domain Subsystem: Not Supported 00:11:18.873 Fixed Capacity Management: Not Supported 00:11:18.873 Variable Capacity Management: Not Supported 00:11:18.873 Delete Endurance Group: Not Supported 00:11:18.873 Delete NVM Set: Not Supported 00:11:18.873 Extended LBA Formats Supported: Supported 00:11:18.873 Flexible Data Placement Supported: Supported 00:11:18.873 00:11:18.873 Controller Memory Buffer Support 00:11:18.873 ================================ 00:11:18.873 Supported: No 00:11:18.873 00:11:18.873 Persistent Memory Region Support 00:11:18.873 ================================ 00:11:18.873 Supported: No 00:11:18.873 00:11:18.873 Admin Command Set Attributes 00:11:18.873 ============================ 00:11:18.873 Security Send/Receive: Not Supported 00:11:18.873 Format NVM: Supported 00:11:18.873 Firmware Activate/Download: Not Supported 00:11:18.873 Namespace Management: Supported 00:11:18.873 Device Self-Test: Not Supported 00:11:18.873 Directives: Supported 00:11:18.873 NVMe-MI: Not Supported 00:11:18.873 Virtualization Management: Not Supported 00:11:18.873 Doorbell Buffer Config: Supported 00:11:18.873 Get LBA Status Capability: Not Supported
00:11:18.873 Command & Feature Lockdown Capability: Not Supported 00:11:18.873 Abort Command Limit: 4 00:11:18.873 Async Event Request Limit: 4 00:11:18.873 Number of Firmware Slots: N/A 00:11:18.873 Firmware Slot 1 Read-Only: N/A 00:11:18.873 Firmware Activation Without Reset: N/A 00:11:18.873 Multiple Update Detection Support: N/A 00:11:18.873 Firmware Update Granularity: No Information Provided 00:11:18.873 Per-Namespace SMART Log: Yes 00:11:18.873 Asymmetric Namespace Access Log Page: Not Supported 00:11:18.873 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:18.873 Command Effects Log Page: Supported 00:11:18.873 Get Log Page Extended Data: Supported 00:11:18.873 Telemetry Log Pages: Not Supported 00:11:18.873 Persistent Event Log Pages: Not Supported 00:11:18.873 Supported Log Pages Log Page: May Support 00:11:18.873 Commands Supported & Effects Log Page: Not Supported 00:11:18.873 Feature Identifiers & Effects Log Page: May Support 00:11:18.873 NVMe-MI Commands & Effects Log Page: May Support 00:11:18.873 Data Area 4 for Telemetry Log: Not Supported 00:11:18.873 Error Log Page Entries Supported: 1 00:11:18.873 Keep Alive: Not Supported 00:11:18.873 00:11:18.873 NVM Command Set Attributes 00:11:18.873 ========================== 00:11:18.873 Submission Queue Entry Size 00:11:18.873 Max: 64 00:11:18.873 Min: 64 00:11:18.873 Completion Queue Entry Size 00:11:18.873 Max: 16 00:11:18.873 Min: 16 00:11:18.873 Number of Namespaces: 256 00:11:18.873 Compare Command: Supported 00:11:18.873 Write Uncorrectable Command: Not Supported 00:11:18.873 Dataset Management Command: Supported 00:11:18.873 Write Zeroes Command: Supported 00:11:18.873 Set Features Save Field: Supported 00:11:18.873 Reservations: Not Supported 00:11:18.873 Timestamp: Supported 00:11:18.873 Copy: Supported 00:11:18.873 Volatile Write Cache: Present 00:11:18.873 Atomic Write Unit (Normal): 1 00:11:18.873 Atomic Write Unit (PFail): 1 00:11:18.873 Atomic Compare & Write Unit: 1 00:11:18.873 Fused Compare & Write: Not Supported 00:11:18.873 Scatter-Gather List 00:11:18.873 SGL Command Set: Supported 00:11:18.873 SGL Keyed: Not Supported 00:11:18.873 SGL Bit Bucket Descriptor: Not Supported 00:11:18.873 SGL Metadata Pointer: Not Supported 00:11:18.873 Oversized SGL: Not Supported 00:11:18.873 SGL Metadata Address: Not Supported 00:11:18.873 SGL Offset: Not Supported 00:11:18.873 Transport SGL Data Block: Not Supported 00:11:18.873 Replay Protected Memory Block: Not Supported 00:11:18.873 00:11:18.873 Firmware Slot Information 00:11:18.873 ========================= 00:11:18.873 Active slot: 1 00:11:18.873 Slot 1 Firmware Revision: 1.0 00:11:18.873 00:11:18.873 00:11:18.873 Commands Supported and Effects 00:11:18.873 ============================== 00:11:18.873 Admin Commands 00:11:18.873 -------------- 00:11:18.873 Delete I/O Submission Queue (00h): Supported 00:11:18.873 Create I/O Submission Queue (01h): Supported 00:11:18.873 Get Log Page (02h): Supported 00:11:18.873 Delete I/O Completion Queue (04h): Supported 00:11:18.873 Create I/O Completion Queue (05h): Supported 00:11:18.873 Identify (06h): Supported 00:11:18.873 Abort (08h): Supported 00:11:18.873 Set Features (09h): Supported 00:11:18.873 Get Features (0Ah): Supported 00:11:18.873 Asynchronous Event Request (0Ch): Supported 00:11:18.873 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:18.873 Directive Send (19h): Supported 00:11:18.873 Directive Receive (1Ah): Supported 00:11:18.873 Virtualization Management (1Ch): Supported 00:11:18.873 Doorbell
Buffer Config (7Ch): Supported 00:11:18.873 Format NVM (80h): Supported LBA-Change 00:11:18.873 I/O Commands 00:11:18.873 ------------ 00:11:18.873 Flush (00h): Supported LBA-Change 00:11:18.873 Write (01h): Supported LBA-Change 00:11:18.873 Read (02h): Supported 00:11:18.873 Compare (05h): Supported 00:11:18.873 Write Zeroes (08h): Supported LBA-Change 00:11:18.873 Dataset Management (09h): Supported LBA-Change 00:11:18.874 Unknown (0Ch): Supported 00:11:18.874 Unknown (12h): Supported 00:11:18.874 Copy (19h): Supported LBA-Change 00:11:18.874 Unknown (1Dh): Supported LBA-Change 00:11:18.874 00:11:18.874 Error Log 00:11:18.874 ========= 00:11:18.874 00:11:18.874 Arbitration 00:11:18.874 =========== 00:11:18.874 Arbitration Burst: no limit 00:11:18.874 00:11:18.874 Power Management 00:11:18.874 ================ 00:11:18.874 Number of Power States: 1 00:11:18.874 Current Power State: Power State #0 00:11:18.874 Power State #0: 00:11:18.874 Max Power: 25.00 W 00:11:18.874 Non-Operational State: Operational 00:11:18.874 Entry Latency: 16 microseconds 00:11:18.874 Exit Latency: 4 microseconds 00:11:18.874 Relative Read Throughput: 0 00:11:18.874 Relative Read Latency: 0 00:11:18.874 Relative Write Throughput: 0 00:11:18.874 Relative Write Latency: 0 00:11:18.874 Idle Power: Not Reported 00:11:18.874 Active Power: Not Reported 00:11:18.874 Non-Operational Permissive Mode: Not Supported 00:11:18.874 00:11:18.874 Health Information 00:11:18.874 ================== 00:11:18.874 Critical Warnings: 00:11:18.874 Available Spare Space: OK 00:11:18.874 Temperature: OK 00:11:18.874 Device Reliability: OK 00:11:18.874 Read Only: No 00:11:18.874 Volatile Memory Backup: OK 00:11:18.874 Current Temperature: 323 Kelvin (50 Celsius) 00:11:18.874 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:18.874 Available Spare: 0% 00:11:18.874 Available Spare Threshold: 0% 00:11:18.874 Life Percentage Used: 0% 00:11:18.874 Data Units Read: 826 00:11:18.874 Data Units Written: 720 00:11:18.874 Host Read Commands: 35529 00:11:18.874 Host Write Commands: 34119 00:11:18.874 Controller Busy Time: 0 minutes 00:11:18.874 Power Cycles: 0 00:11:18.874 Power On Hours: 0 hours 00:11:18.874 Unsafe Shutdowns: 0 00:11:18.874 Unrecoverable Media Errors: 0 00:11:18.874 Lifetime Error Log Entries: 0 00:11:18.874 Warning Temperature Time: 0 minutes 00:11:18.874 Critical Temperature Time: 0 minutes 00:11:18.874 00:11:18.874 Number of Queues 00:11:18.874 ================ 00:11:18.874 Number of I/O Submission Queues: 64 00:11:18.874 Number of I/O Completion Queues: 64 00:11:18.874 00:11:18.874 ZNS Specific Controller Data 00:11:18.874 ============================ 00:11:18.874 Zone Append Size Limit: 0 00:11:18.874 00:11:18.874 00:11:18.874 Active Namespaces 00:11:18.874 ================= 00:11:18.874 Namespace ID:1 00:11:18.874 Error Recovery Timeout: Unlimited 00:11:18.874 Command Set Identifier: NVM (00h) 00:11:18.874 Deallocate: Supported 00:11:18.874 Deallocated/Unwritten Error: Supported 00:11:18.874 Deallocated Read Value: All 0x00 00:11:18.874 Deallocate in Write Zeroes: Not Supported 00:11:18.874 Deallocated Guard Field: 0xFFFF 00:11:18.874 Flush: Supported 00:11:18.874 Reservation: Not Supported 00:11:18.874 Namespace Sharing Capabilities: Multiple Controllers 00:11:18.874 Size (in LBAs): 262144 (1GiB) 00:11:18.874 Capacity (in LBAs): 262144 (1GiB) 00:11:18.874 Utilization (in LBAs): 262144 (1GiB) 00:11:18.874 Thin Provisioning: Not Supported 00:11:18.874 Per-NS Atomic Units: No 00:11:18.874 Maximum Single Source Range Length: 
128 00:11:18.874 Maximum Copy Length: 128 00:11:18.874 Maximum Source Range Count: 128 00:11:18.874 NGUID/EUI64 Never Reused: No 00:11:18.874 Namespace Write Protected: No 00:11:18.874 Endurance group ID: 1 00:11:18.874 Number of LBA Formats: 8 00:11:18.874 Current LBA Format: LBA Format #04 00:11:18.874 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:18.874 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:18.874 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:18.874 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:18.874 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:18.874 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:18.874 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:18.874 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:18.874 00:11:18.874 Get Feature FDP: 00:11:18.874 ================ 00:11:18.874 Enabled: Yes 00:11:18.874 FDP configuration index: 0 00:11:18.874 00:11:18.874 FDP configurations log page 00:11:18.874 =========================== 00:11:18.874 Number of FDP configurations: 1 00:11:18.874 Version: 0 00:11:18.874 Size: 112 00:11:18.874 FDP Configuration Descriptor: 0 00:11:18.874 Descriptor Size: 96 00:11:18.874 Reclaim Group Identifier format: 2 00:11:18.874 FDP Volatile Write Cache: Not Present 00:11:18.874 FDP Configuration: Valid 00:11:18.874 Vendor Specific Size: 0 00:11:18.874 Number of Reclaim Groups: 2 00:11:18.874 Number of Reclaim Unit Handles: 8 00:11:18.874 Max Placement Identifiers: 128 00:11:18.874 Number of Namespaces Supported: 256 00:11:18.874 Reclaim unit Nominal Size: 6000000 bytes 00:11:18.874 Estimated Reclaim Unit Time Limit: Not Reported 00:11:18.874 RUH Desc #000: RUH Type: Initially Isolated 00:11:18.874 RUH Desc #001: RUH Type: Initially Isolated 00:11:18.874 RUH Desc #002: RUH Type: Initially Isolated 00:11:18.874 RUH Desc #003: RUH Type: Initially Isolated 00:11:18.874 RUH Desc #004: RUH Type: Initially Isolated 00:11:18.874 RUH Desc #005: RUH Type: Initially Isolated 00:11:18.874 RUH Desc #006: RUH Type: Initially Isolated 00:11:18.874 RUH Desc #007: RUH Type: Initially Isolated 00:11:18.874 00:11:18.874 FDP reclaim unit handle usage log page 00:11:18.874 ====================================== 00:11:18.874 Number of Reclaim Unit Handles: 8 00:11:18.874 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:18.874 RUH Usage Desc #001: RUH Attributes: Unused 00:11:18.874 RUH Usage Desc #002: RUH Attributes: Unused 00:11:18.874 RUH Usage Desc #003: RUH Attributes: Unused 00:11:18.874 RUH Usage Desc #004: RUH Attributes: Unused 00:11:18.874 RUH Usage Desc #005: RUH Attributes: Unused 00:11:18.874 RUH Usage Desc #006: RUH Attributes: Unused 00:11:18.874 RUH Usage Desc #007: RUH Attributes: Unused 00:11:18.874 00:11:18.874 FDP statistics log page 00:11:18.874 ======================= 00:11:18.874 Host bytes with metadata written: 451190784 00:11:18.874 Media bytes with metadata written: 451244032 00:11:18.874 Media bytes erased: 0 00:11:18.874 00:11:18.874 FDP events log page 00:11:18.874 =================== 00:11:18.874 Number of FDP events: 0 00:11:18.874 00:11:18.874 00:11:18.874 real 0m1.589s 00:11:18.874 user 0m0.634s 00:11:18.874 sys 0m0.761s 00:11:18.874 19:25:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:18.874 19:25:44 -- common/autotest_common.sh@10 -- # set +x 00:11:18.874 ************************************ 00:11:18.874 END TEST nvme_identify 00:11:18.874 ************************************ 00:11:18.874 19:25:44 -- nvme/nvme.sh@86 -- # run_test nvme_perf
nvme_perf 00:11:18.874 19:25:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:18.874 19:25:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.874 19:25:44 -- common/autotest_common.sh@10 -- # set +x 00:11:18.874 ************************************ 00:11:18.874 START TEST nvme_perf 00:11:18.874 ************************************ 00:11:18.874 19:25:44 -- common/autotest_common.sh@1111 -- # nvme_perf 00:11:18.874 19:25:44 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:20.251 Initializing NVMe Controllers 00:11:20.251 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:20.251 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:20.251 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:20.251 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:20.251 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:20.251 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:20.251 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:20.251 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:20.251 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:20.251 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:20.251 Initialization complete. Launching workers. 00:11:20.251 ======================================================== 00:11:20.251 Latency(us) 00:11:20.251 Device Information : IOPS MiB/s Average min max 00:11:20.251 PCIE (0000:00:10.0) NSID 1 from core 0: 13606.56 159.45 9433.64 6829.13 41003.01 00:11:20.251 PCIE (0000:00:11.0) NSID 1 from core 0: 13606.56 159.45 9413.54 6888.74 38560.82 00:11:20.251 PCIE (0000:00:13.0) NSID 1 from core 0: 13606.56 159.45 9391.41 6971.89 37184.93 00:11:20.251 PCIE (0000:00:12.0) NSID 1 from core 0: 13606.56 159.45 9369.63 6942.54 34959.50 00:11:20.251 PCIE (0000:00:12.0) NSID 2 from core 0: 13606.56 159.45 9346.85 6951.20 32718.49 00:11:20.251 PCIE (0000:00:12.0) NSID 3 from core 0: 13606.56 159.45 9325.37 6832.34 30309.46 00:11:20.251 ======================================================== 00:11:20.251 Total : 81639.33 956.71 9380.07 6829.13 41003.01 00:11:20.251 00:11:20.251 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:20.251 ================================================================================= 00:11:20.251 1.00000% : 7125.967us 00:11:20.251 10.00000% : 7612.479us 00:11:20.251 25.00000% : 8013.135us 00:11:20.251 50.00000% : 8528.266us 00:11:20.251 75.00000% : 9615.762us 00:11:20.251 90.00000% : 12191.413us 00:11:20.251 95.00000% : 14538.117us 00:11:20.251 98.00000% : 17972.318us 00:11:20.251 99.00000% : 22551.252us 00:11:20.251 99.50000% : 32281.488us 00:11:20.251 99.90000% : 40523.570us 00:11:20.251 99.99000% : 40981.464us 00:11:20.251 99.99900% : 41210.410us 00:11:20.251 99.99990% : 41210.410us 00:11:20.251 99.99999% : 41210.410us 00:11:20.251 00:11:20.251 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:20.251 ================================================================================= 00:11:20.251 1.00000% : 7183.203us 00:11:20.251 10.00000% : 7669.715us 00:11:20.251 25.00000% : 8013.135us 00:11:20.251 50.00000% : 8471.029us 00:11:20.251 75.00000% : 9672.999us 00:11:20.251 90.00000% : 12076.940us 00:11:20.251 95.00000% : 14538.117us 00:11:20.251 98.00000% : 18201.265us 00:11:20.251 99.00000% : 22322.306us 00:11:20.251 99.50000% : 30220.968us 00:11:20.251 99.90000% : 38005.156us 00:11:20.251 99.99000% : 38691.997us 
00:11:20.251 99.99900% : 38691.997us 00:11:20.252 99.99990% : 38691.997us 00:11:20.252 99.99999% : 38691.997us 00:11:20.252 00:11:20.252 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:20.252 ================================================================================= 00:11:20.252 1.00000% : 7183.203us 00:11:20.252 10.00000% : 7669.715us 00:11:20.252 25.00000% : 8013.135us 00:11:20.252 50.00000% : 8471.029us 00:11:20.252 75.00000% : 9615.762us 00:11:20.252 90.00000% : 12248.650us 00:11:20.252 95.00000% : 14595.354us 00:11:20.252 98.00000% : 18315.738us 00:11:20.252 99.00000% : 22322.306us 00:11:20.252 99.50000% : 27931.500us 00:11:20.252 99.90000% : 36860.423us 00:11:20.252 99.99000% : 37318.316us 00:11:20.252 99.99900% : 37318.316us 00:11:20.252 99.99990% : 37318.316us 00:11:20.252 99.99999% : 37318.316us 00:11:20.252 00:11:20.252 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:20.252 ================================================================================= 00:11:20.252 1.00000% : 7211.822us 00:11:20.252 10.00000% : 7669.715us 00:11:20.252 25.00000% : 8013.135us 00:11:20.252 50.00000% : 8471.029us 00:11:20.252 75.00000% : 9615.762us 00:11:20.252 90.00000% : 12134.176us 00:11:20.252 95.00000% : 14652.590us 00:11:20.252 98.00000% : 18544.685us 00:11:20.252 99.00000% : 22322.306us 00:11:20.252 99.50000% : 25298.613us 00:11:20.252 99.90000% : 34570.955us 00:11:20.252 99.99000% : 35028.849us 00:11:20.252 99.99900% : 35028.849us 00:11:20.252 99.99990% : 35028.849us 00:11:20.252 99.99999% : 35028.849us 00:11:20.252 00:11:20.252 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:20.252 ================================================================================= 00:11:20.252 1.00000% : 7211.822us 00:11:20.252 10.00000% : 7669.715us 00:11:20.252 25.00000% : 8013.135us 00:11:20.252 50.00000% : 8471.029us 00:11:20.252 75.00000% : 9672.999us 00:11:20.252 90.00000% : 12477.597us 00:11:20.252 95.00000% : 14767.064us 00:11:20.252 98.00000% : 16598.638us 00:11:20.252 99.00000% : 20719.679us 00:11:20.252 99.50000% : 22207.832us 00:11:20.252 99.90000% : 32281.488us 00:11:20.252 99.99000% : 32739.382us 00:11:20.252 99.99900% : 32739.382us 00:11:20.252 99.99990% : 32739.382us 00:11:20.252 99.99999% : 32739.382us 00:11:20.252 00:11:20.252 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:20.252 ================================================================================= 00:11:20.252 1.00000% : 7183.203us 00:11:20.252 10.00000% : 7669.715us 00:11:20.252 25.00000% : 8013.135us 00:11:20.252 50.00000% : 8471.029us 00:11:20.252 75.00000% : 9672.999us 00:11:20.252 90.00000% : 12248.650us 00:11:20.252 95.00000% : 14595.354us 00:11:20.252 98.00000% : 17399.951us 00:11:20.252 99.00000% : 20032.838us 00:11:20.252 99.50000% : 21749.939us 00:11:20.252 99.90000% : 29992.021us 00:11:20.252 99.99000% : 30449.914us 00:11:20.252 99.99900% : 30449.914us 00:11:20.252 99.99990% : 30449.914us 00:11:20.252 99.99999% : 30449.914us 00:11:20.252 00:11:20.252 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:20.252 ============================================================================== 00:11:20.252 Range in us Cumulative IO count 00:11:20.252 6811.165 - 6839.783: 0.0220% ( 3) 00:11:20.252 6839.783 - 6868.402: 0.0440% ( 3) 00:11:20.252 6868.402 - 6897.020: 0.0954% ( 7) 00:11:20.252 6897.020 - 6925.638: 0.1467% ( 7) 00:11:20.252 6925.638 - 6954.257: 0.2054% ( 8) 00:11:20.252 6954.257 - 6982.875: 0.3008% ( 
00:11:20.252 [histogram bucket rows elided: cumulative IO count climbs from 0.4548% at 6982.875 - 7011.493 us to 100.0000% at 40981.464 - 41210.410 us]
00:11:20.253 
00:11:20.253 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:20.253 ==============================================================================
00:11:20.253        Range in us     Cumulative    IO count
00:11:20.253 [histogram bucket rows elided: 0.0073% at 6868.402 - 6897.020 us through 100.0000% at 38463.050 - 38691.997 us]
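Each table above is a cumulative latency histogram: for every microsecond range, perf prints the fraction of all I/Os that completed at or below that bucket, plus the raw count that landed in it. The percentile rows in the Summary latency data sections below are read straight off these tables, since the p-th percentile is simply the upper edge of the first bucket whose cumulative column reaches p. A minimal lookup sketch under that reading, assuming the bucket rows have first been extracted into whitespace-separated start/end/cumulative/count columns (that intermediate format is an assumption for illustration, not something the log emits):

  #!/usr/bin/env bash
  # find_pctile.sh: read a percentile off a cumulative latency histogram.
  # Expects pre-extracted bucket rows on stdin, one per line:
  #   range_start  range_end  cumulative_pct  io_count
  # (this row format is assumed for illustration; the raw log needs parsing first)
  # Usage: ./find_pctile.sh 99.0 < buckets.txt
  awk -v p="$1" '$3 + 0 >= p + 0 { printf "%.5f%% : %.3fus\n", p, $2; exit }'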
00:11:20.255 
00:11:20.255 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:20.255 ==============================================================================
00:11:20.255        Range in us     Cumulative    IO count
00:11:20.255 [histogram bucket rows elided: 0.0293% at 6954.257 - 6982.875 us through 100.0000% at 37089.369 - 37318.316 us]
00:11:20.256 
00:11:20.256 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:20.256 ==============================================================================
00:11:20.256        Range in us     Cumulative    IO count
00:11:20.256 [histogram bucket rows elided: 0.0073% at 6925.638 - 6954.257 us through 100.0000% at 34799.902 - 35028.849 us]
00:11:20.257 
00:11:20.257 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:20.258 ==============================================================================
00:11:20.258        Range in us     Cumulative    IO count
00:11:20.258 [histogram bucket rows elided: 0.0073% at 6925.638 - 6954.257 us through 100.0000% at 32510.435 - 32739.382 us]
00:11:20.259 
00:11:20.259 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:20.259 ==============================================================================
00:11:20.259        Range in us     Cumulative    IO count
00:11:20.259 [histogram bucket rows elided: 0.0073% at 6811.165 - 6839.783 us through 100.0000% at 30220.968 - 30449.914 us]
28045.974: 99.5892% ( 3) 00:11:20.260 28045.974 - 28160.447: 99.6112% ( 3) 00:11:20.260 28160.447 - 28274.921: 99.6332% ( 3) 00:11:20.260 28274.921 - 28389.394: 99.6552% ( 3) 00:11:20.260 28389.394 - 28503.867: 99.6699% ( 2) 00:11:20.260 28503.867 - 28618.341: 99.6919% ( 3) 00:11:20.260 28618.341 - 28732.814: 99.7066% ( 2) 00:11:20.260 28732.814 - 28847.287: 99.7286% ( 3) 00:11:20.260 28847.287 - 28961.761: 99.7506% ( 3) 00:11:20.260 28961.761 - 29076.234: 99.7726% ( 3) 00:11:20.260 29076.234 - 29190.707: 99.7946% ( 3) 00:11:20.260 29190.707 - 29305.181: 99.8093% ( 2) 00:11:20.260 29305.181 - 29534.128: 99.8460% ( 5) 00:11:20.260 29534.128 - 29763.074: 99.8900% ( 6) 00:11:20.260 29763.074 - 29992.021: 99.9413% ( 7) 00:11:20.260 29992.021 - 30220.968: 99.9780% ( 5) 00:11:20.260 30220.968 - 30449.914: 100.0000% ( 3) 00:11:20.260 00:11:20.260 19:25:45 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:11:21.637 Initializing NVMe Controllers 00:11:21.637 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:21.637 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:21.637 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:21.637 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:21.637 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:21.637 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:21.637 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:21.637 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:21.637 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:21.637 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:21.637 Initialization complete. Launching workers. 00:11:21.637 ======================================================== 00:11:21.637 Latency(us) 00:11:21.637 Device Information : IOPS MiB/s Average min max 00:11:21.637 PCIE (0000:00:10.0) NSID 1 from core 0: 7528.82 88.23 17060.46 9323.99 47796.71 00:11:21.637 PCIE (0000:00:11.0) NSID 1 from core 0: 7528.82 88.23 17027.38 9702.57 45237.82 00:11:21.637 PCIE (0000:00:13.0) NSID 1 from core 0: 7528.82 88.23 16994.05 9694.95 43966.09 00:11:21.637 PCIE (0000:00:12.0) NSID 1 from core 0: 7528.82 88.23 16961.55 9774.11 42176.09 00:11:21.637 PCIE (0000:00:12.0) NSID 2 from core 0: 7528.82 88.23 16928.71 9809.76 40435.32 00:11:21.637 PCIE (0000:00:12.0) NSID 3 from core 0: 7528.82 88.23 16896.07 9574.82 38656.51 00:11:21.637 ======================================================== 00:11:21.637 Total : 45172.91 529.37 16978.04 9323.99 47796.71 00:11:21.637 00:11:21.637 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:21.637 ================================================================================= 00:11:21.637 1.00000% : 10302.603us 00:11:21.637 10.00000% : 11561.810us 00:11:21.637 25.00000% : 12534.833us 00:11:21.637 50.00000% : 14767.064us 00:11:21.637 75.00000% : 17056.531us 00:11:21.637 90.00000% : 31823.595us 00:11:21.637 95.00000% : 35028.849us 00:11:21.637 98.00000% : 36860.423us 00:11:21.637 99.00000% : 38234.103us 00:11:21.637 99.50000% : 46476.185us 00:11:21.637 99.90000% : 47620.919us 00:11:21.637 99.99000% : 47849.866us 00:11:21.637 99.99900% : 47849.866us 00:11:21.637 99.99990% : 47849.866us 00:11:21.637 99.99999% : 47849.866us 00:11:21.637 00:11:21.637 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:21.637 ================================================================================= 00:11:21.637 1.00000% : 10188.129us 
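
A quick cross-check worth applying to the summary table above: the MiB/s column should follow directly from the IOPS column and the 12288-byte I/O size requested with -o (per the standard spdk_nvme_perf options, -q 128 sets the queue depth, -w write the workload, -t 1 a one-second run, -LL detailed latency tracking, and -i 0 the shared-memory instance id). A minimal Python sketch, using only figures transcribed from this run:

# Sketch (not part of the log): recompute the MiB/s column of the
# spdk_nvme_perf summary above from its IOPS column and the -o 12288
# I/O size. All numbers are transcribed from this run.
IO_SIZE_BYTES = 12288          # from the -o flag: 12 KiB writes

def mib_per_s(iops: float, io_size: int = IO_SIZE_BYTES) -> float:
    """Throughput in MiB/s implied by an IOPS figure at a fixed I/O size."""
    return iops * io_size / (1024 * 1024)

print(f"{mib_per_s(7528.82):.2f}")    # 88.23  -- matches each per-namespace row
print(f"{mib_per_s(45172.91):.2f}")   # 529.37 -- matches the Total row

Both values reproduce the logged MiB/s figures, and the Total IOPS is six times the per-namespace IOPS, so the table is internally consistent.
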
00:11:21.637 
00:11:21.637 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:21.637 =================================================================================
00:11:21.637   1.00000% : 10302.603us
00:11:21.637  10.00000% : 11561.810us
00:11:21.637  25.00000% : 12534.833us
00:11:21.637  50.00000% : 14767.064us
00:11:21.637  75.00000% : 17056.531us
00:11:21.637  90.00000% : 31823.595us
00:11:21.637  95.00000% : 35028.849us
00:11:21.637  98.00000% : 36860.423us
00:11:21.637  99.00000% : 38234.103us
00:11:21.637  99.50000% : 46476.185us
00:11:21.637  99.90000% : 47620.919us
00:11:21.637  99.99000% : 47849.866us
00:11:21.637  99.99900% : 47849.866us
00:11:21.637  99.99990% : 47849.866us
00:11:21.637  99.99999% : 47849.866us
00:11:21.637 
00:11:21.637 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:21.637 =================================================================================
00:11:21.637   1.00000% : 10188.129us
00:11:21.637  10.00000% : 11561.810us
00:11:21.637  25.00000% : 12477.597us
00:11:21.637  50.00000% : 14652.590us
00:11:21.637  75.00000% : 17056.531us
00:11:21.637  90.00000% : 32510.435us
00:11:21.637  95.00000% : 34799.902us
00:11:21.637  98.00000% : 36173.583us
00:11:21.637  99.00000% : 37547.263us
00:11:21.637  99.50000% : 43957.771us
00:11:21.637  99.90000% : 45102.505us
00:11:21.637  99.99000% : 45331.452us
00:11:21.637  99.99900% : 45331.452us
00:11:21.637  99.99990% : 45331.452us
00:11:21.637  99.99999% : 45331.452us
00:11:21.637 
00:11:21.637 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:21.637 =================================================================================
00:11:21.637   1.00000% : 10188.129us
00:11:21.637  10.00000% : 11561.810us
00:11:21.637  25.00000% : 12534.833us
00:11:21.637  50.00000% : 14652.590us
00:11:21.637  75.00000% : 16942.058us
00:11:21.637  90.00000% : 32739.382us
00:11:21.637  95.00000% : 34570.955us
00:11:21.637  98.00000% : 36173.583us
00:11:21.637  99.00000% : 37318.316us
00:11:21.637  99.50000% : 42813.038us
00:11:21.637  99.90000% : 43728.824us
00:11:21.637  99.99000% : 44186.718us
00:11:21.637  99.99900% : 44186.718us
00:11:21.637  99.99990% : 44186.718us
00:11:21.637  99.99999% : 44186.718us
00:11:21.637 
00:11:21.637 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:21.637 =================================================================================
00:11:21.637   1.00000% : 10474.313us
00:11:21.637  10.00000% : 11504.573us
00:11:21.637  25.00000% : 12649.307us
00:11:21.637  50.00000% : 14767.064us
00:11:21.637  75.00000% : 16942.058us
00:11:21.637  90.00000% : 32281.488us
00:11:21.637  95.00000% : 34342.009us
00:11:21.637  98.00000% : 35944.636us
00:11:21.637  99.00000% : 37318.316us
00:11:21.637  99.50000% : 40981.464us
00:11:21.637  99.90000% : 42126.197us
00:11:21.637  99.99000% : 42355.144us
00:11:21.637  99.99900% : 42355.144us
00:11:21.637  99.99990% : 42355.144us
00:11:21.637  99.99999% : 42355.144us
00:11:21.637 
00:11:21.637 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:21.637 =================================================================================
00:11:21.637   1.00000% : 10359.839us
00:11:21.637  10.00000% : 11504.573us
00:11:21.637  25.00000% : 12763.780us
00:11:21.637  50.00000% : 14767.064us
00:11:21.637  75.00000% : 16942.058us
00:11:21.637  90.00000% : 31823.595us
00:11:21.637  95.00000% : 34342.009us
00:11:21.637  98.00000% : 35944.636us
00:11:21.637  99.00000% : 37547.263us
00:11:21.637  99.50000% : 39149.890us
00:11:21.637  99.90000% : 40294.624us
00:11:21.637  99.99000% : 40523.570us
00:11:21.637  99.99900% : 40523.570us
00:11:21.637  99.99990% : 40523.570us
00:11:21.637  99.99999% : 40523.570us
00:11:21.638 
00:11:21.638 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:21.638 =================================================================================
00:11:21.638   1.00000% : 10359.839us
00:11:21.638  10.00000% : 11619.046us
00:11:21.638  25.00000% : 12592.070us
00:11:21.638  50.00000% : 14767.064us
00:11:21.638  75.00000% : 16942.058us
00:11:21.638  90.00000% : 31594.648us
00:11:21.638  95.00000% : 34113.062us
00:11:21.638  98.00000% : 35715.689us
00:11:21.638  99.00000% : 37089.369us
00:11:21.638  99.50000% : 37776.210us
00:11:21.638  99.90000% : 38463.050us
00:11:21.638  99.99000% : 38691.997us
00:11:21.638  99.99900% : 38691.997us
00:11:21.638  99.99990% : 38691.997us
00:11:21.638  99.99999% : 38691.997us
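
These six percentile tables are the most diff-worthy part of the run: median write latency sits near 14.7 ms on every namespace, while the 99.90000% point spreads from 38463us (0000:00:12.0 NSID 3) up to 47620us (0000:00:10.0). A minimal Python parsing sketch for pulling the tables out of a log shaped like this one -- parse_summaries and both regexes are hypothetical helpers written for this line format, not part of SPDK:

# Sketch: collect "Summary latency data" blocks into a dict keyed by device.
import re

DEVICE_RE = re.compile(r"Summary latency data for (.+):\s*$")
PCTL_RE = re.compile(r"(\d+\.\d+)%\s*:\s*([\d.]+)us")

def parse_summaries(log_text: str) -> dict[str, dict[float, float]]:
    """Map device string -> {percentile -> latency in microseconds}."""
    out: dict[str, dict[float, float]] = {}
    current = None
    for line in log_text.splitlines():
        d = DEVICE_RE.search(line)
        if d:
            current = d.group(1)
            out[current] = {}
            continue
        p = PCTL_RE.search(line)
        if p and current is not None:
            out[current][float(p.group(1))] = float(p.group(2))
    return out

# parse_summaries(log)["PCIE (0000:00:10.0) NSID 1 from core 0"][99.0]
# -> 38234.103 (99th-percentile write latency in us for that namespace)
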
00:11:21.638 
00:11:21.638 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:21.638 ==============================================================================
00:11:21.638        Range in us     Cumulative    IO count
00:11:21.638 [Histogram bucket rows flattened in the original log. Buckets run from 9272.342 - 9329.579us (0.0132%, 1 IO) up to 47620.919 - 47849.866us, where the cumulative count reaches 100.0000%.]
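
The histogram rows are cumulative, so any percentile can be read directly off them: the Nth percentile is the upper edge of the first bucket whose cumulative percentage reaches N, which is exactly how the summary tables above relate to these histograms. A minimal sketch, with three rows transcribed from the 0000:00:10.0 histogram and percentile_us as a hypothetical helper:

# Sketch: read a percentile off cumulative histogram rows of the form
# "lo - hi: cumulative% ( count )". Rows transcribed from the log above.
rows = [
    (9272.342, 9329.579, 0.0132, 1),      # first bucket
    (10245.366, 10302.603, 1.0726, 13),   # first bucket at or past 1%
    (47620.919, 47849.866, 100.0000, 4),  # last bucket
]

def percentile_us(rows: list[tuple], target_pct: float) -> float:
    """Upper edge of the first bucket whose cumulative % reaches target_pct."""
    for lo, hi, cum_pct, count in rows:
        if cum_pct >= target_pct:
            return hi
    raise ValueError("target percentile beyond last bucket")

print(percentile_us(rows, 1.0))   # 10302.603 -- matches "1.00000% : 10302.603us"
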
00:11:21.639 
00:11:21.639 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:21.639 ==============================================================================
00:11:21.639        Range in us     Cumulative    IO count
00:11:21.639 [Histogram bucket rows flattened in the original log. Buckets run from 9672.999 - 9730.236us (0.0397%, 3 IOs) up to 45102.505 - 45331.452us, where the cumulative count reaches 100.0000%.]
00:11:21.640 
00:11:21.640 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:21.640 ==============================================================================
00:11:21.640        Range in us     Cumulative    IO count
00:11:21.640 [Histogram bucket rows flattened in the original log. Buckets run from 9672.999 - 9730.236us (0.0397%, 3 IOs) up to 43957.771 - 44186.718us, where the cumulative count reaches 100.0000%.]
00:11:21.642 
00:11:21.642 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:21.642 ==============================================================================
00:11:21.642        Range in us     Cumulative    IO count
00:11:21.642 [Histogram bucket rows flattened in the original log. Buckets run from 9730.236 - 9787.472us (0.0132%, 1 IO) up to 42126.197 - 42355.144us, where the cumulative count reaches 100.0000%.]
00:11:21.643 10646.023 - 10703.259: 1.9995% ( 8) 00:11:21.643 10703.259 - 10760.496: 2.1054% ( 8) 00:11:21.643 10760.496 - 10817.733: 2.3173% ( 16) 00:11:21.643 10817.733 - 10874.969: 2.5821% ( 20) 00:11:21.643 10874.969 - 10932.206: 2.9529% ( 28) 00:11:21.643 10932.206 - 10989.443: 3.3898% ( 33) 00:11:21.643 10989.443 - 11046.679: 4.0651% ( 51) 00:11:21.643 11046.679 - 11103.916: 4.8729% ( 61) 00:11:21.643 11103.916 - 11161.153: 5.5614% ( 52) 00:11:21.643 11161.153 - 11218.390: 6.4486% ( 67) 00:11:21.643 11218.390 - 11275.626: 7.2034% ( 57) 00:11:21.643 11275.626 - 11332.863: 7.9979% ( 60) 00:11:21.643 11332.863 - 11390.100: 8.8321% ( 63) 00:11:21.643 11390.100 - 11447.336: 9.5869% ( 57) 00:11:21.643 11447.336 - 11504.573: 10.2092% ( 47) 00:11:21.643 11504.573 - 11561.810: 10.9640% ( 57) 00:11:21.643 11561.810 - 11619.046: 11.5599% ( 45) 00:11:21.643 11619.046 - 11676.283: 12.2484% ( 52) 00:11:21.643 11676.283 - 11733.520: 13.2548% ( 76) 00:11:21.643 11733.520 - 11790.756: 14.2876% ( 78) 00:11:21.643 11790.756 - 11847.993: 15.0821% ( 60) 00:11:21.644 11847.993 - 11905.230: 15.9560% ( 66) 00:11:21.644 11905.230 - 11962.466: 16.6578% ( 53) 00:11:21.644 11962.466 - 12019.703: 17.5583% ( 68) 00:11:21.644 12019.703 - 12076.940: 18.1674% ( 46) 00:11:21.644 12076.940 - 12134.176: 18.8692% ( 53) 00:11:21.644 12134.176 - 12191.413: 19.3591% ( 37) 00:11:21.644 12191.413 - 12248.650: 20.0212% ( 50) 00:11:21.644 12248.650 - 12305.886: 20.7495% ( 55) 00:11:21.644 12305.886 - 12363.123: 21.3983% ( 49) 00:11:21.644 12363.123 - 12420.360: 21.8485% ( 34) 00:11:21.644 12420.360 - 12477.597: 22.3385% ( 37) 00:11:21.644 12477.597 - 12534.833: 22.8151% ( 36) 00:11:21.644 12534.833 - 12592.070: 23.3183% ( 38) 00:11:21.644 12592.070 - 12649.307: 23.8877% ( 43) 00:11:21.644 12649.307 - 12706.543: 24.4439% ( 42) 00:11:21.644 12706.543 - 12763.780: 25.0794% ( 48) 00:11:21.644 12763.780 - 12821.017: 25.9401% ( 65) 00:11:21.644 12821.017 - 12878.253: 26.8141% ( 66) 00:11:21.644 12878.253 - 12935.490: 27.9926% ( 89) 00:11:21.644 12935.490 - 12992.727: 29.1049% ( 84) 00:11:21.644 12992.727 - 13049.963: 30.7336% ( 123) 00:11:21.644 13049.963 - 13107.200: 32.0445% ( 99) 00:11:21.644 13107.200 - 13164.437: 33.2892% ( 94) 00:11:21.644 13164.437 - 13221.673: 34.3750% ( 82) 00:11:21.644 13221.673 - 13278.910: 35.2357% ( 65) 00:11:21.644 13278.910 - 13336.147: 35.8051% ( 43) 00:11:21.644 13336.147 - 13393.383: 36.4407% ( 48) 00:11:21.644 13393.383 - 13450.620: 37.0630% ( 47) 00:11:21.644 13450.620 - 13507.857: 37.5794% ( 39) 00:11:21.644 13507.857 - 13565.093: 38.1224% ( 41) 00:11:21.644 13565.093 - 13622.330: 38.6255% ( 38) 00:11:21.644 13622.330 - 13679.567: 39.2346% ( 46) 00:11:21.644 13679.567 - 13736.803: 39.8305% ( 45) 00:11:21.644 13736.803 - 13794.040: 40.3337% ( 38) 00:11:21.644 13794.040 - 13851.277: 40.7839% ( 34) 00:11:21.644 13851.277 - 13908.514: 41.2738% ( 37) 00:11:21.644 13908.514 - 13965.750: 41.9094% ( 48) 00:11:21.644 13965.750 - 14022.987: 42.6377% ( 55) 00:11:21.644 14022.987 - 14080.224: 43.7235% ( 82) 00:11:21.644 14080.224 - 14137.460: 44.3856% ( 50) 00:11:21.644 14137.460 - 14194.697: 45.0477% ( 50) 00:11:21.644 14194.697 - 14251.934: 45.7230% ( 51) 00:11:21.644 14251.934 - 14309.170: 46.3983% ( 51) 00:11:21.644 14309.170 - 14366.407: 46.9544% ( 42) 00:11:21.644 14366.407 - 14423.644: 47.5503% ( 45) 00:11:21.644 14423.644 - 14480.880: 48.0667% ( 39) 00:11:21.644 14480.880 - 14538.117: 48.4772% ( 31) 00:11:21.644 14538.117 - 14595.354: 48.8612% ( 29) 00:11:21.644 14595.354 - 14652.590: 49.3909% ( 40) 
00:11:21.644 14652.590 - 14767.064: 50.7283% ( 101) 00:11:21.644 14767.064 - 14881.537: 51.9995% ( 96) 00:11:21.644 14881.537 - 14996.010: 53.5487% ( 117) 00:11:21.644 14996.010 - 15110.484: 55.2172% ( 126) 00:11:21.644 15110.484 - 15224.957: 56.9783% ( 133) 00:11:21.644 15224.957 - 15339.431: 58.5143% ( 116) 00:11:21.644 15339.431 - 15453.904: 60.1033% ( 120) 00:11:21.644 15453.904 - 15568.377: 61.7982% ( 128) 00:11:21.644 15568.377 - 15682.851: 63.9698% ( 164) 00:11:21.644 15682.851 - 15797.324: 65.9163% ( 147) 00:11:21.644 15797.324 - 15911.797: 67.4523% ( 116) 00:11:21.644 15911.797 - 16026.271: 68.7368% ( 97) 00:11:21.644 16026.271 - 16140.744: 69.8226% ( 82) 00:11:21.644 16140.744 - 16255.217: 70.5773% ( 57) 00:11:21.644 16255.217 - 16369.691: 71.2659% ( 52) 00:11:21.644 16369.691 - 16484.164: 71.9544% ( 52) 00:11:21.644 16484.164 - 16598.638: 72.9211% ( 73) 00:11:21.644 16598.638 - 16713.111: 73.8215% ( 68) 00:11:21.644 16713.111 - 16827.584: 74.5101% ( 52) 00:11:21.644 16827.584 - 16942.058: 75.2251% ( 54) 00:11:21.644 16942.058 - 17056.531: 76.0858% ( 65) 00:11:21.644 17056.531 - 17171.004: 76.7346% ( 49) 00:11:21.644 17171.004 - 17285.478: 77.4364% ( 53) 00:11:21.644 17285.478 - 17399.951: 78.3236% ( 67) 00:11:21.644 17399.951 - 17514.424: 78.8136% ( 37) 00:11:21.644 17514.424 - 17628.898: 79.2505% ( 33) 00:11:21.644 17628.898 - 17743.371: 79.6345% ( 29) 00:11:21.644 17743.371 - 17857.845: 79.9788% ( 26) 00:11:21.644 17857.845 - 17972.318: 80.2172% ( 18) 00:11:21.644 17972.318 - 18086.791: 80.4555% ( 18) 00:11:21.644 18086.791 - 18201.265: 80.6806% ( 17) 00:11:21.644 18201.265 - 18315.738: 80.8925% ( 16) 00:11:21.644 18315.738 - 18430.211: 81.1043% ( 16) 00:11:21.644 18430.211 - 18544.685: 81.2500% ( 11) 00:11:21.644 18544.685 - 18659.158: 81.4619% ( 16) 00:11:21.644 18659.158 - 18773.631: 81.6340% ( 13) 00:11:21.644 18773.631 - 18888.105: 81.8591% ( 17) 00:11:21.644 18888.105 - 19002.578: 82.1107% ( 19) 00:11:21.644 19002.578 - 19117.052: 82.3093% ( 15) 00:11:21.644 19117.052 - 19231.525: 82.5477% ( 18) 00:11:21.644 19231.525 - 19345.998: 82.7463% ( 15) 00:11:21.644 19345.998 - 19460.472: 82.9317% ( 14) 00:11:21.644 19460.472 - 19574.945: 83.1435% ( 16) 00:11:21.644 19574.945 - 19689.418: 83.3289% ( 14) 00:11:21.644 19689.418 - 19803.892: 83.5540% ( 17) 00:11:21.644 19803.892 - 19918.365: 83.7262% ( 13) 00:11:21.644 19918.365 - 20032.838: 83.9115% ( 14) 00:11:21.644 20032.838 - 20147.312: 84.0572% ( 11) 00:11:21.644 20147.312 - 20261.785: 84.1234% ( 5) 00:11:21.644 20261.785 - 20376.259: 84.2161% ( 7) 00:11:21.644 20376.259 - 20490.732: 84.2823% ( 5) 00:11:21.644 20490.732 - 20605.205: 84.3353% ( 4) 00:11:21.644 20605.205 - 20719.679: 84.3750% ( 3) 00:11:21.644 20719.679 - 20834.152: 84.4015% ( 2) 00:11:21.644 20834.152 - 20948.625: 84.4147% ( 1) 00:11:21.644 20948.625 - 21063.099: 84.4677% ( 4) 00:11:21.644 21063.099 - 21177.572: 84.5207% ( 4) 00:11:21.644 21177.572 - 21292.045: 84.6133% ( 7) 00:11:21.644 21292.045 - 21406.519: 84.7060% ( 7) 00:11:21.644 21406.519 - 21520.992: 84.7325% ( 2) 00:11:21.644 21520.992 - 21635.466: 84.7458% ( 1) 00:11:21.644 23009.146 - 23123.619: 84.7590% ( 1) 00:11:21.644 23123.619 - 23238.093: 84.8385% ( 6) 00:11:21.644 23238.093 - 23352.566: 84.9179% ( 6) 00:11:21.644 23352.566 - 23467.039: 84.9576% ( 3) 00:11:21.644 23467.039 - 23581.513: 85.0238% ( 5) 00:11:21.644 23581.513 - 23695.986: 85.0503% ( 2) 00:11:21.644 23695.986 - 23810.459: 85.0768% ( 2) 00:11:21.644 23810.459 - 23924.933: 85.1033% ( 2) 00:11:21.644 23924.933 - 24039.406: 85.1165% ( 
1) 00:11:21.644 24039.406 - 24153.879: 85.1562% ( 3) 00:11:21.644 24153.879 - 24268.353: 85.1960% ( 3) 00:11:21.644 24268.353 - 24382.826: 85.2225% ( 2) 00:11:21.644 24382.826 - 24497.300: 85.2357% ( 1) 00:11:21.644 24497.300 - 24611.773: 85.2622% ( 2) 00:11:21.644 24611.773 - 24726.246: 85.2754% ( 1) 00:11:21.644 24726.246 - 24840.720: 85.3019% ( 2) 00:11:21.644 24840.720 - 24955.193: 85.3549% ( 4) 00:11:21.644 24955.193 - 25069.666: 85.4078% ( 4) 00:11:21.644 25069.666 - 25184.140: 85.5403% ( 10) 00:11:21.644 25184.140 - 25298.613: 85.6462% ( 8) 00:11:21.644 25298.613 - 25413.086: 85.7918% ( 11) 00:11:21.644 25413.086 - 25527.560: 85.9243% ( 10) 00:11:21.644 25527.560 - 25642.033: 86.0567% ( 10) 00:11:21.644 25642.033 - 25756.507: 86.1361% ( 6) 00:11:21.644 25756.507 - 25870.980: 86.2156% ( 6) 00:11:21.644 25870.980 - 25985.453: 86.3215% ( 8) 00:11:21.644 25985.453 - 26099.927: 86.3877% ( 5) 00:11:21.644 26099.927 - 26214.400: 86.4407% ( 4) 00:11:21.644 26214.400 - 26328.873: 86.5334% ( 7) 00:11:21.644 26328.873 - 26443.347: 86.6261% ( 7) 00:11:21.644 26443.347 - 26557.820: 86.7452% ( 9) 00:11:21.644 26557.820 - 26672.293: 86.8247% ( 6) 00:11:21.644 26672.293 - 26786.767: 86.9439% ( 9) 00:11:21.644 26786.767 - 26901.240: 86.9968% ( 4) 00:11:21.644 26901.240 - 27015.714: 87.0365% ( 3) 00:11:21.644 27015.714 - 27130.187: 87.0895% ( 4) 00:11:21.644 27130.187 - 27244.660: 87.1292% ( 3) 00:11:21.644 27244.660 - 27359.134: 87.1822% ( 4) 00:11:21.644 27359.134 - 27473.607: 87.1954% ( 1) 00:11:21.644 27473.607 - 27588.080: 87.2219% ( 2) 00:11:21.644 27588.080 - 27702.554: 87.2352% ( 1) 00:11:21.644 27702.554 - 27817.027: 87.2617% ( 2) 00:11:21.644 27817.027 - 27931.500: 87.2749% ( 1) 00:11:21.644 27931.500 - 28045.974: 87.2881% ( 1) 00:11:21.644 28389.394 - 28503.867: 87.3411% ( 4) 00:11:21.644 28503.867 - 28618.341: 87.3808% ( 3) 00:11:21.644 28618.341 - 28732.814: 87.4206% ( 3) 00:11:21.644 28732.814 - 28847.287: 87.4603% ( 3) 00:11:21.644 28847.287 - 28961.761: 87.5132% ( 4) 00:11:21.644 28961.761 - 29076.234: 87.5530% ( 3) 00:11:21.644 29076.234 - 29190.707: 87.6059% ( 4) 00:11:21.644 29190.707 - 29305.181: 87.6457% ( 3) 00:11:21.644 29305.181 - 29534.128: 87.7781% ( 10) 00:11:21.644 29534.128 - 29763.074: 87.9767% ( 15) 00:11:21.644 29763.074 - 29992.021: 88.2415% ( 20) 00:11:21.644 29992.021 - 30220.968: 88.4269% ( 14) 00:11:21.644 30220.968 - 30449.914: 88.6123% ( 14) 00:11:21.644 30449.914 - 30678.861: 88.7977% ( 14) 00:11:21.644 30678.861 - 30907.808: 89.2611% ( 35) 00:11:21.644 30907.808 - 31136.755: 89.4465% ( 14) 00:11:21.644 31136.755 - 31365.701: 89.6319% ( 14) 00:11:21.644 31365.701 - 31594.648: 89.9364% ( 23) 00:11:21.644 31594.648 - 31823.595: 90.2542% ( 24) 00:11:21.644 31823.595 - 32052.541: 90.5853% ( 25) 00:11:21.644 32052.541 - 32281.488: 90.9958% ( 31) 00:11:21.644 32281.488 - 32510.435: 91.3665% ( 28) 00:11:21.644 32510.435 - 32739.382: 91.7505% ( 29) 00:11:21.644 32739.382 - 32968.328: 92.1610% ( 31) 00:11:21.644 32968.328 - 33197.275: 92.5847% ( 32) 00:11:21.644 33197.275 - 33426.222: 93.1012% ( 39) 00:11:21.644 33426.222 - 33655.169: 93.5514% ( 34) 00:11:21.644 33655.169 - 33884.115: 94.2664% ( 54) 00:11:21.644 33884.115 - 34113.062: 94.7828% ( 39) 00:11:21.644 34113.062 - 34342.009: 95.3390% ( 42) 00:11:21.644 34342.009 - 34570.955: 95.7892% ( 34) 00:11:21.644 34570.955 - 34799.902: 96.1864% ( 30) 00:11:21.644 34799.902 - 35028.849: 96.9147% ( 55) 00:11:21.644 35028.849 - 35257.796: 97.2325% ( 24) 00:11:21.644 35257.796 - 35486.742: 97.5900% ( 27) 00:11:21.644 
35486.742 - 35715.689: 97.8814% ( 22) 00:11:21.644 35715.689 - 35944.636: 98.1197% ( 18) 00:11:21.645 35944.636 - 36173.583: 98.3051% ( 14) 00:11:21.645 36173.583 - 36402.529: 98.4905% ( 14) 00:11:21.645 36402.529 - 36631.476: 98.6494% ( 12) 00:11:21.645 36631.476 - 36860.423: 98.7288% ( 6) 00:11:21.645 36860.423 - 37089.369: 98.8347% ( 8) 00:11:21.645 37089.369 - 37318.316: 98.9274% ( 7) 00:11:21.645 37318.316 - 37547.263: 99.0201% ( 7) 00:11:21.645 37547.263 - 37776.210: 99.0996% ( 6) 00:11:21.645 37776.210 - 38005.156: 99.1393% ( 3) 00:11:21.645 38005.156 - 38234.103: 99.1790% ( 3) 00:11:21.645 38234.103 - 38463.050: 99.2585% ( 6) 00:11:21.645 38463.050 - 38691.997: 99.3379% ( 6) 00:11:21.645 38691.997 - 38920.943: 99.4306% ( 7) 00:11:21.645 38920.943 - 39149.890: 99.5101% ( 6) 00:11:21.645 39149.890 - 39378.837: 99.6028% ( 7) 00:11:21.645 39378.837 - 39607.783: 99.6954% ( 7) 00:11:21.645 39607.783 - 39836.730: 99.7749% ( 6) 00:11:21.645 39836.730 - 40065.677: 99.8543% ( 6) 00:11:21.645 40065.677 - 40294.624: 99.9338% ( 6) 00:11:21.645 40294.624 - 40523.570: 100.0000% ( 5) 00:11:21.645 00:11:21.645 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:21.645 ============================================================================== 00:11:21.645 Range in us Cumulative IO count 00:11:21.645 9558.526 - 9615.762: 0.0530% ( 4) 00:11:21.645 9615.762 - 9672.999: 0.1192% ( 5) 00:11:21.645 9672.999 - 9730.236: 0.1986% ( 6) 00:11:21.645 9730.236 - 9787.472: 0.3310% ( 10) 00:11:21.645 9787.472 - 9844.709: 0.5429% ( 16) 00:11:21.645 9844.709 - 9901.946: 0.6224% ( 6) 00:11:21.645 9901.946 - 9959.183: 0.6753% ( 4) 00:11:21.645 9959.183 - 10016.419: 0.7150% ( 3) 00:11:21.645 10016.419 - 10073.656: 0.7812% ( 5) 00:11:21.645 10073.656 - 10130.893: 0.8210% ( 3) 00:11:21.645 10130.893 - 10188.129: 0.8607% ( 3) 00:11:21.645 10188.129 - 10245.366: 0.8739% ( 1) 00:11:21.645 10245.366 - 10302.603: 0.9666% ( 7) 00:11:21.645 10302.603 - 10359.839: 1.0858% ( 9) 00:11:21.645 10359.839 - 10417.076: 1.1785% ( 7) 00:11:21.645 10417.076 - 10474.313: 1.4433% ( 20) 00:11:21.645 10474.313 - 10531.549: 1.5757% ( 10) 00:11:21.645 10531.549 - 10588.786: 1.6949% ( 9) 00:11:21.645 10588.786 - 10646.023: 1.8141% ( 9) 00:11:21.645 10646.023 - 10703.259: 1.9068% ( 7) 00:11:21.645 10703.259 - 10760.496: 1.9862% ( 6) 00:11:21.645 10760.496 - 10817.733: 2.1981% ( 16) 00:11:21.645 10817.733 - 10874.969: 2.2908% ( 7) 00:11:21.645 10874.969 - 10932.206: 2.3835% ( 7) 00:11:21.645 10932.206 - 10989.443: 2.5556% ( 13) 00:11:21.645 10989.443 - 11046.679: 2.7675% ( 16) 00:11:21.645 11046.679 - 11103.916: 3.0588% ( 22) 00:11:21.645 11103.916 - 11161.153: 3.4031% ( 26) 00:11:21.645 11161.153 - 11218.390: 4.2373% ( 63) 00:11:21.645 11218.390 - 11275.626: 5.0847% ( 64) 00:11:21.645 11275.626 - 11332.863: 5.9322% ( 64) 00:11:21.645 11332.863 - 11390.100: 6.9783% ( 79) 00:11:21.645 11390.100 - 11447.336: 7.8390% ( 65) 00:11:21.645 11447.336 - 11504.573: 8.7526% ( 69) 00:11:21.645 11504.573 - 11561.810: 9.7325% ( 74) 00:11:21.645 11561.810 - 11619.046: 10.7521% ( 77) 00:11:21.645 11619.046 - 11676.283: 11.8379% ( 82) 00:11:21.645 11676.283 - 11733.520: 13.0959% ( 95) 00:11:21.645 11733.520 - 11790.756: 13.9566% ( 65) 00:11:21.645 11790.756 - 11847.993: 14.9894% ( 78) 00:11:21.645 11847.993 - 11905.230: 15.8766% ( 67) 00:11:21.645 11905.230 - 11962.466: 16.6446% ( 58) 00:11:21.645 11962.466 - 12019.703: 17.3332% ( 52) 00:11:21.645 12019.703 - 12076.940: 18.2336% ( 68) 00:11:21.645 12076.940 - 12134.176: 19.0943% ( 65) 
00:11:21.645 12134.176 - 12191.413: 19.9285% ( 63) 00:11:21.645 12191.413 - 12248.650: 20.4052% ( 36) 00:11:21.645 12248.650 - 12305.886: 21.0408% ( 48) 00:11:21.645 12305.886 - 12363.123: 21.6234% ( 44) 00:11:21.645 12363.123 - 12420.360: 22.4974% ( 66) 00:11:21.645 12420.360 - 12477.597: 23.2256% ( 55) 00:11:21.645 12477.597 - 12534.833: 24.0069% ( 59) 00:11:21.645 12534.833 - 12592.070: 25.0265% ( 77) 00:11:21.645 12592.070 - 12649.307: 26.2977% ( 96) 00:11:21.645 12649.307 - 12706.543: 27.4497% ( 87) 00:11:21.645 12706.543 - 12763.780: 28.3766% ( 70) 00:11:21.645 12763.780 - 12821.017: 29.2240% ( 64) 00:11:21.645 12821.017 - 12878.253: 30.0053% ( 59) 00:11:21.645 12878.253 - 12935.490: 30.7468% ( 56) 00:11:21.645 12935.490 - 12992.727: 31.3957% ( 49) 00:11:21.645 12992.727 - 13049.963: 31.8591% ( 35) 00:11:21.645 13049.963 - 13107.200: 32.2431% ( 29) 00:11:21.645 13107.200 - 13164.437: 32.6668% ( 32) 00:11:21.645 13164.437 - 13221.673: 33.1171% ( 34) 00:11:21.645 13221.673 - 13278.910: 33.5408% ( 32) 00:11:21.645 13278.910 - 13336.147: 34.0307% ( 37) 00:11:21.645 13336.147 - 13393.383: 34.6398% ( 46) 00:11:21.645 13393.383 - 13450.620: 35.2754% ( 48) 00:11:21.645 13450.620 - 13507.857: 36.1626% ( 67) 00:11:21.645 13507.857 - 13565.093: 37.2484% ( 82) 00:11:21.645 13565.093 - 13622.330: 38.2283% ( 74) 00:11:21.645 13622.330 - 13679.567: 39.0757% ( 64) 00:11:21.645 13679.567 - 13736.803: 39.7643% ( 52) 00:11:21.645 13736.803 - 13794.040: 40.3469% ( 44) 00:11:21.645 13794.040 - 13851.277: 40.9560% ( 46) 00:11:21.645 13851.277 - 13908.514: 41.5122% ( 42) 00:11:21.645 13908.514 - 13965.750: 42.2669% ( 57) 00:11:21.645 13965.750 - 14022.987: 42.8496% ( 44) 00:11:21.645 14022.987 - 14080.224: 43.2601% ( 31) 00:11:21.645 14080.224 - 14137.460: 43.7500% ( 37) 00:11:21.645 14137.460 - 14194.697: 44.3061% ( 42) 00:11:21.645 14194.697 - 14251.934: 45.0344% ( 55) 00:11:21.645 14251.934 - 14309.170: 45.6171% ( 44) 00:11:21.645 14309.170 - 14366.407: 46.0938% ( 36) 00:11:21.645 14366.407 - 14423.644: 46.6367% ( 41) 00:11:21.645 14423.644 - 14480.880: 47.3914% ( 57) 00:11:21.645 14480.880 - 14538.117: 48.0005% ( 46) 00:11:21.645 14538.117 - 14595.354: 48.6494% ( 49) 00:11:21.645 14595.354 - 14652.590: 49.4571% ( 61) 00:11:21.645 14652.590 - 14767.064: 50.6488% ( 90) 00:11:21.645 14767.064 - 14881.537: 52.2246% ( 119) 00:11:21.645 14881.537 - 14996.010: 53.9989% ( 134) 00:11:21.645 14996.010 - 15110.484: 55.6806% ( 127) 00:11:21.645 15110.484 - 15224.957: 57.3358% ( 125) 00:11:21.645 15224.957 - 15339.431: 58.8983% ( 118) 00:11:21.645 15339.431 - 15453.904: 60.6859% ( 135) 00:11:21.645 15453.904 - 15568.377: 62.2087% ( 115) 00:11:21.645 15568.377 - 15682.851: 63.3872% ( 89) 00:11:21.645 15682.851 - 15797.324: 64.6319% ( 94) 00:11:21.645 15797.324 - 15911.797: 65.7839% ( 87) 00:11:21.645 15911.797 - 16026.271: 67.2669% ( 112) 00:11:21.645 16026.271 - 16140.744: 68.6573% ( 105) 00:11:21.645 16140.744 - 16255.217: 69.7696% ( 84) 00:11:21.645 16255.217 - 16369.691: 71.0408% ( 96) 00:11:21.645 16369.691 - 16484.164: 72.1133% ( 81) 00:11:21.645 16484.164 - 16598.638: 73.1992% ( 82) 00:11:21.645 16598.638 - 16713.111: 74.0069% ( 61) 00:11:21.645 16713.111 - 16827.584: 74.8676% ( 65) 00:11:21.645 16827.584 - 16942.058: 75.4502% ( 44) 00:11:21.645 16942.058 - 17056.531: 76.0328% ( 44) 00:11:21.645 17056.531 - 17171.004: 76.8671% ( 63) 00:11:21.645 17171.004 - 17285.478: 77.6748% ( 61) 00:11:21.645 17285.478 - 17399.951: 78.1118% ( 33) 00:11:21.645 17399.951 - 17514.424: 78.7341% ( 47) 00:11:21.645 17514.424 - 
17628.898: 79.2903% ( 42) 00:11:21.645 17628.898 - 17743.371: 79.8729% ( 44) 00:11:21.645 17743.371 - 17857.845: 80.2436% ( 28) 00:11:21.645 17857.845 - 17972.318: 80.5350% ( 22) 00:11:21.645 17972.318 - 18086.791: 80.8263% ( 22) 00:11:21.645 18086.791 - 18201.265: 80.9454% ( 9) 00:11:21.645 18201.265 - 18315.738: 81.2103% ( 20) 00:11:21.645 18315.738 - 18430.211: 81.5016% ( 22) 00:11:21.645 18430.211 - 18544.685: 81.6605% ( 12) 00:11:21.645 18544.685 - 18659.158: 81.8591% ( 15) 00:11:21.645 18659.158 - 18773.631: 82.0312% ( 13) 00:11:21.645 18773.631 - 18888.105: 82.1504% ( 9) 00:11:21.645 18888.105 - 19002.578: 82.3093% ( 12) 00:11:21.645 19002.578 - 19117.052: 82.4947% ( 14) 00:11:21.645 19117.052 - 19231.525: 82.7066% ( 16) 00:11:21.645 19231.525 - 19345.998: 82.9184% ( 16) 00:11:21.645 19345.998 - 19460.472: 83.0773% ( 12) 00:11:21.645 19460.472 - 19574.945: 83.2230% ( 11) 00:11:21.645 19574.945 - 19689.418: 83.3554% ( 10) 00:11:21.645 19689.418 - 19803.892: 83.4216% ( 5) 00:11:21.645 19803.892 - 19918.365: 83.4746% ( 4) 00:11:21.645 19918.365 - 20032.838: 83.5275% ( 4) 00:11:21.645 20032.838 - 20147.312: 83.5805% ( 4) 00:11:21.645 20147.312 - 20261.785: 83.6467% ( 5) 00:11:21.645 20261.785 - 20376.259: 83.7129% ( 5) 00:11:21.645 20376.259 - 20490.732: 83.7526% ( 3) 00:11:21.645 20490.732 - 20605.205: 83.7924% ( 3) 00:11:21.645 20605.205 - 20719.679: 83.8321% ( 3) 00:11:21.645 20719.679 - 20834.152: 83.8718% ( 3) 00:11:21.645 20834.152 - 20948.625: 83.8983% ( 2) 00:11:21.645 21292.045 - 21406.519: 83.9115% ( 1) 00:11:21.645 21406.519 - 21520.992: 83.9248% ( 1) 00:11:21.645 21520.992 - 21635.466: 83.9645% ( 3) 00:11:21.645 21635.466 - 21749.939: 83.9910% ( 2) 00:11:21.645 21749.939 - 21864.412: 84.0175% ( 2) 00:11:21.645 21864.412 - 21978.886: 84.0837% ( 5) 00:11:21.645 21978.886 - 22093.359: 84.2029% ( 9) 00:11:21.645 22093.359 - 22207.832: 84.2956% ( 7) 00:11:21.645 22207.832 - 22322.306: 84.4015% ( 8) 00:11:21.645 22322.306 - 22436.779: 84.4942% ( 7) 00:11:21.645 22436.779 - 22551.252: 84.5207% ( 2) 00:11:21.645 22551.252 - 22665.726: 84.5736% ( 4) 00:11:21.645 22665.726 - 22780.199: 84.6398% ( 5) 00:11:21.645 22780.199 - 22894.672: 84.7458% ( 8) 00:11:21.645 22894.672 - 23009.146: 84.8385% ( 7) 00:11:21.645 23009.146 - 23123.619: 84.9179% ( 6) 00:11:21.645 23123.619 - 23238.093: 84.9576% ( 3) 00:11:21.645 23238.093 - 23352.566: 84.9974% ( 3) 00:11:21.645 23352.566 - 23467.039: 85.0503% ( 4) 00:11:21.645 23467.039 - 23581.513: 85.1033% ( 4) 00:11:21.646 23581.513 - 23695.986: 85.2225% ( 9) 00:11:21.646 23695.986 - 23810.459: 85.3681% ( 11) 00:11:21.646 23810.459 - 23924.933: 85.4608% ( 7) 00:11:21.646 23924.933 - 24039.406: 85.6065% ( 11) 00:11:21.646 24039.406 - 24153.879: 85.7521% ( 11) 00:11:21.646 24153.879 - 24268.353: 85.8183% ( 5) 00:11:21.646 24268.353 - 24382.826: 85.8713% ( 4) 00:11:21.646 24382.826 - 24497.300: 85.9110% ( 3) 00:11:21.646 24497.300 - 24611.773: 85.9640% ( 4) 00:11:21.646 24611.773 - 24726.246: 86.0037% ( 3) 00:11:21.646 24726.246 - 24840.720: 86.0434% ( 3) 00:11:21.646 24840.720 - 24955.193: 86.0832% ( 3) 00:11:21.646 24955.193 - 25069.666: 86.1229% ( 3) 00:11:21.646 25069.666 - 25184.140: 86.1626% ( 3) 00:11:21.646 25184.140 - 25298.613: 86.1891% ( 2) 00:11:21.646 25298.613 - 25413.086: 86.2156% ( 2) 00:11:21.646 25413.086 - 25527.560: 86.2288% ( 1) 00:11:21.646 25527.560 - 25642.033: 86.2553% ( 2) 00:11:21.646 25642.033 - 25756.507: 86.2685% ( 1) 00:11:21.646 25756.507 - 25870.980: 86.2818% ( 1) 00:11:21.646 25870.980 - 25985.453: 86.3083% ( 2) 
00:11:21.646 25985.453 - 26099.927: 86.3215% ( 1) 00:11:21.646 26099.927 - 26214.400: 86.3347% ( 1) 00:11:21.646 26214.400 - 26328.873: 86.3612% ( 2) 00:11:21.646 26328.873 - 26443.347: 86.3877% ( 2) 00:11:21.646 26443.347 - 26557.820: 86.4010% ( 1) 00:11:21.646 26557.820 - 26672.293: 86.4539% ( 4) 00:11:21.646 26672.293 - 26786.767: 86.5069% ( 4) 00:11:21.646 26786.767 - 26901.240: 86.5201% ( 1) 00:11:21.646 26901.240 - 27015.714: 86.5599% ( 3) 00:11:21.646 27015.714 - 27130.187: 86.5863% ( 2) 00:11:21.646 27130.187 - 27244.660: 86.6261% ( 3) 00:11:21.646 27244.660 - 27359.134: 86.7717% ( 11) 00:11:21.646 27359.134 - 27473.607: 86.9439% ( 13) 00:11:21.646 27473.607 - 27588.080: 87.0101% ( 5) 00:11:21.646 27588.080 - 27702.554: 87.0895% ( 6) 00:11:21.646 27702.554 - 27817.027: 87.1822% ( 7) 00:11:21.646 27817.027 - 27931.500: 87.2617% ( 6) 00:11:21.646 27931.500 - 28045.974: 87.3411% ( 6) 00:11:21.646 28045.974 - 28160.447: 87.4603% ( 9) 00:11:21.646 28160.447 - 28274.921: 87.5927% ( 10) 00:11:21.646 28274.921 - 28389.394: 87.6986% ( 8) 00:11:21.646 28389.394 - 28503.867: 87.8178% ( 9) 00:11:21.646 28503.867 - 28618.341: 87.9370% ( 9) 00:11:21.646 28618.341 - 28732.814: 88.0826% ( 11) 00:11:21.646 28732.814 - 28847.287: 88.2415% ( 12) 00:11:21.646 28847.287 - 28961.761: 88.3739% ( 10) 00:11:21.646 28961.761 - 29076.234: 88.4666% ( 7) 00:11:21.646 29076.234 - 29190.707: 88.5064% ( 3) 00:11:21.646 29190.707 - 29305.181: 88.5328% ( 2) 00:11:21.646 29305.181 - 29534.128: 88.5990% ( 5) 00:11:21.646 29534.128 - 29763.074: 88.6653% ( 5) 00:11:21.646 29763.074 - 29992.021: 88.7315% ( 5) 00:11:21.646 29992.021 - 30220.968: 88.7977% ( 5) 00:11:21.646 30220.968 - 30449.914: 88.9036% ( 8) 00:11:21.646 30449.914 - 30678.861: 89.0360% ( 10) 00:11:21.646 30678.861 - 30907.808: 89.3141% ( 21) 00:11:21.646 30907.808 - 31136.755: 89.6849% ( 28) 00:11:21.646 31136.755 - 31365.701: 89.9894% ( 23) 00:11:21.646 31365.701 - 31594.648: 90.3204% ( 25) 00:11:21.646 31594.648 - 31823.595: 90.6118% ( 22) 00:11:21.646 31823.595 - 32052.541: 90.9163% ( 23) 00:11:21.646 32052.541 - 32281.488: 91.3400% ( 32) 00:11:21.646 32281.488 - 32510.435: 91.7903% ( 34) 00:11:21.646 32510.435 - 32739.382: 92.1875% ( 30) 00:11:21.646 32739.382 - 32968.328: 92.5847% ( 30) 00:11:21.646 32968.328 - 33197.275: 93.0747% ( 37) 00:11:21.646 33197.275 - 33426.222: 93.5117% ( 33) 00:11:21.646 33426.222 - 33655.169: 94.0281% ( 39) 00:11:21.646 33655.169 - 33884.115: 94.7299% ( 53) 00:11:21.646 33884.115 - 34113.062: 95.1801% ( 34) 00:11:21.646 34113.062 - 34342.009: 95.5376% ( 27) 00:11:21.646 34342.009 - 34570.955: 95.9084% ( 28) 00:11:21.646 34570.955 - 34799.902: 96.3718% ( 35) 00:11:21.646 34799.902 - 35028.849: 96.7029% ( 25) 00:11:21.646 35028.849 - 35257.796: 97.0471% ( 26) 00:11:21.646 35257.796 - 35486.742: 97.4311% ( 29) 00:11:21.646 35486.742 - 35715.689: 98.0138% ( 44) 00:11:21.646 35715.689 - 35944.636: 98.2918% ( 21) 00:11:21.646 35944.636 - 36173.583: 98.4772% ( 14) 00:11:21.646 36173.583 - 36402.529: 98.6494% ( 13) 00:11:21.646 36402.529 - 36631.476: 98.7950% ( 11) 00:11:21.646 36631.476 - 36860.423: 98.9672% ( 13) 00:11:21.646 36860.423 - 37089.369: 99.1128% ( 11) 00:11:21.646 37089.369 - 37318.316: 99.3114% ( 15) 00:11:21.646 37318.316 - 37547.263: 99.4968% ( 14) 00:11:21.646 37547.263 - 37776.210: 99.6425% ( 11) 00:11:21.646 37776.210 - 38005.156: 99.7484% ( 8) 00:11:21.646 38005.156 - 38234.103: 99.8543% ( 8) 00:11:21.646 38234.103 - 38463.050: 99.9206% ( 5) 00:11:21.646 38463.050 - 38691.997: 100.0000% ( 6) 00:11:21.646 
00:11:21.646 ************************************
00:11:21.646 END TEST nvme_perf
00:11:21.646 ************************************
00:11:21.646 19:25:47 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:11:21.646 
00:11:21.646 real 0m2.645s
00:11:21.646 user 0m2.250s
00:11:21.646 sys 0m0.292s
00:11:21.646 19:25:47 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:11:21.646 19:25:47 -- common/autotest_common.sh@10 -- # set +x
00:11:21.646 19:25:47 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:21.646 19:25:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:11:21.646 19:25:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:21.646 19:25:47 -- common/autotest_common.sh@10 -- # set +x
00:11:21.905 ************************************
00:11:21.905 START TEST nvme_hello_world
00:11:21.905 ************************************
00:11:21.905 19:25:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:22.165 Initializing NVMe Controllers
00:11:22.165 Attached to 0000:00:10.0
00:11:22.165 Namespace ID: 1 size: 6GB
00:11:22.165 Attached to 0000:00:11.0
00:11:22.165 Namespace ID: 1 size: 5GB
00:11:22.165 Attached to 0000:00:13.0
00:11:22.165 Namespace ID: 1 size: 1GB
00:11:22.165 Attached to 0000:00:12.0
00:11:22.165 Namespace ID: 1 size: 4GB
00:11:22.165 Namespace ID: 2 size: 4GB
00:11:22.165 Namespace ID: 3 size: 4GB
00:11:22.165 Initialization complete.
00:11:22.165 INFO: using host memory buffer for IO
00:11:22.165 Hello world!
00:11:22.165 INFO: using host memory buffer for IO
00:11:22.165 Hello world!
00:11:22.165 INFO: using host memory buffer for IO
00:11:22.165 Hello world!
00:11:22.165 INFO: using host memory buffer for IO
00:11:22.165 Hello world!
00:11:22.165 INFO: using host memory buffer for IO
00:11:22.165 Hello world!
00:11:22.165 INFO: using host memory buffer for IO
00:11:22.165 Hello world!
00:11:22.165 
00:11:22.165 real 0m0.320s
00:11:22.165 user 0m0.111s
00:11:22.165 sys 0m0.161s
00:11:22.165 19:25:47 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:11:22.165 19:25:47 -- common/autotest_common.sh@10 -- # set +x
00:11:22.165 ************************************
00:11:22.165 END TEST nvme_hello_world
00:11:22.165 ************************************
00:11:22.165 19:25:47 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:22.165 19:25:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:22.165 19:25:47 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:22.165 19:25:47 -- common/autotest_common.sh@10 -- # set +x
00:11:22.165 ************************************
00:11:22.165 START TEST nvme_sgl
00:11:22.165 ************************************
00:11:22.165 19:25:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:22.426 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:11:22.426 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:11:22.426 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:11:22.426 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:11:22.426 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:11:22.426 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:11:22.426 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:11:22.426 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:11:22.426 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:11:22.426 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:11:22.426 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:11:22.426 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:11:22.426 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:11:22.426 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:11:22.426 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:11:22.426 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:11:22.426 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:11:22.426 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:11:22.426 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:11:22.426 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:11:22.426 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:11:22.426 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:11:22.426 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:11:22.426 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:11:22.426 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:11:22.426 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:11:22.426 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:11:22.426 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:11:22.426 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:11:22.426 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:11:22.426 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:11:22.426 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:11:22.426 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:11:22.426 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:11:22.426 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:11:22.426 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:11:22.685 NVMe Readv/Writev Request test
00:11:22.685 Attached to 0000:00:10.0
00:11:22.685 Attached to 0000:00:11.0
00:11:22.685 Attached to 0000:00:13.0
00:11:22.685 Attached to 0000:00:12.0
00:11:22.685 0000:00:10.0: build_io_request_2 test passed
00:11:22.685 0000:00:10.0: build_io_request_4 test passed
00:11:22.685 0000:00:10.0: build_io_request_5 test passed
00:11:22.685 0000:00:10.0: build_io_request_6 test passed
00:11:22.685 0000:00:10.0: build_io_request_7 test passed
00:11:22.685 0000:00:10.0: build_io_request_10 test passed
00:11:22.685 0000:00:11.0: build_io_request_2 test passed
00:11:22.685 0000:00:11.0: build_io_request_4 test passed
00:11:22.685 0000:00:11.0: build_io_request_5 test passed
00:11:22.685 0000:00:11.0: build_io_request_6 test passed
00:11:22.685 0000:00:11.0: build_io_request_7 test passed
00:11:22.685 0000:00:11.0: build_io_request_10 test passed
00:11:22.685 Cleaning up...
00:11:22.685 
00:11:22.685 real 0m0.360s
00:11:22.685 user 0m0.174s
00:11:22.685 sys 0m0.142s
00:11:22.685 19:25:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:11:22.685 19:25:48 -- common/autotest_common.sh@10 -- # set +x
00:11:22.685 ************************************
00:11:22.685 END TEST nvme_sgl
00:11:22.685 ************************************
00:11:22.685 19:25:48 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:22.685 19:25:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:22.685 19:25:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:22.685 19:25:48 -- common/autotest_common.sh@10 -- # set +x
00:11:22.685 ************************************
00:11:22.685 START TEST nvme_e2edp
00:11:22.685 ************************************
00:11:22.685 19:25:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:22.944 NVMe Write/Read with End-to-End data protection test
00:11:22.944 Attached to 0000:00:10.0
00:11:22.944 Attached to 0000:00:11.0
00:11:22.944 Attached to 0000:00:13.0
00:11:22.944 Attached to 0000:00:12.0
00:11:22.944 Cleaning up...
00:11:22.944 ************************************
00:11:22.944 END TEST nvme_e2edp
00:11:22.944 ************************************
00:11:22.944 
00:11:22.944 real 0m0.282s
00:11:22.944 user 0m0.097s
00:11:22.944 sys 0m0.133s
00:11:22.944 19:25:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:11:22.944 19:25:48 -- common/autotest_common.sh@10 -- # set +x
00:11:22.944 19:25:48 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:22.944 19:25:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:22.944 19:25:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:22.944 19:25:48 -- common/autotest_common.sh@10 -- # set +x
00:11:23.203 ************************************
00:11:23.203 START TEST nvme_reserve
00:11:23.203 ************************************
00:11:23.203 19:25:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:23.463 =====================================================
00:11:23.463 NVMe Controller at PCI bus 0, device 16, function 0
00:11:23.463 =====================================================
00:11:23.463 Reservations: Not Supported
00:11:23.463 =====================================================
00:11:23.463 NVMe Controller at PCI bus 0, device 17, function 0
00:11:23.463 =====================================================
00:11:23.463 Reservations: Not Supported
00:11:23.463 =====================================================
00:11:23.463 NVMe Controller at PCI bus 0, device 19, function 0
00:11:23.463 =====================================================
00:11:23.463 Reservations: Not Supported
00:11:23.463 =====================================================
00:11:23.463 NVMe Controller at PCI bus 0, device 18, function 0
00:11:23.463 =====================================================
00:11:23.463 Reservations: Not Supported
00:11:23.463 Reservation test passed
00:11:23.463 
00:11:23.463 real 0m0.303s
00:11:23.463 user 0m0.097s
00:11:23.463 sys 0m0.153s
00:11:23.463 19:25:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:11:23.463 19:25:48 -- common/autotest_common.sh@10 -- # set +x
00:11:23.463 ************************************
00:11:23.463 END TEST nvme_reserve
00:11:23.463 ************************************
00:11:23.463 19:25:48 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:23.463 19:25:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:11:23.463 19:25:48 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:23.463 19:25:48 -- common/autotest_common.sh@10 -- # set +x
00:11:23.463 ************************************
00:11:23.463 START TEST nvme_err_injection
00:11:23.463 ************************************
00:11:23.463 19:25:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:23.720 NVMe Error Injection test
00:11:23.720 Attached to 0000:00:10.0
00:11:23.720 Attached to 0000:00:11.0
00:11:23.720 Attached to 0000:00:13.0
00:11:23.720 Attached to 0000:00:12.0
00:11:23.720 0000:00:10.0: get features failed as expected
00:11:23.720 0000:00:11.0: get features failed as expected
00:11:23.720 0000:00:13.0: get features failed as expected
00:11:23.720 0000:00:12.0: get features failed as expected
00:11:23.720 0000:00:10.0: get features successfully as expected
00:11:23.720 0000:00:11.0: get features successfully as expected
00:11:23.720 0000:00:13.0: get features successfully as expected
00:11:23.720 0000:00:12.0: get features successfully as expected
00:11:23.720 0000:00:10.0: read failed as expected
00:11:23.720 0000:00:11.0: read failed as expected
00:11:23.720 0000:00:12.0: read failed as expected
00:11:23.720 0000:00:13.0: read failed as expected
00:11:23.720 0000:00:10.0: read successfully as expected
00:11:23.720 0000:00:11.0: read successfully as expected
00:11:23.720 0000:00:13.0: read successfully as expected
00:11:23.720 0000:00:12.0: read successfully as expected
00:11:23.720 Cleaning up...
00:11:23.720 ************************************
00:11:23.720 END TEST nvme_err_injection
00:11:23.720 ************************************
00:11:23.720 
00:11:23.720 real 0m0.308s
00:11:23.720 user 0m0.103s
00:11:23.720 sys 0m0.145s
00:11:23.720 19:25:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:11:23.720 19:25:49 -- common/autotest_common.sh@10 -- # set +x
00:11:23.977 19:25:49 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:23.977 19:25:49 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:11:23.977 19:25:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:23.977 19:25:49 -- common/autotest_common.sh@10 -- # set +x
00:11:23.977 ************************************
00:11:23.977 START TEST nvme_overhead
00:11:23.977 ************************************
00:11:23.977 19:25:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:25.353 Initializing NVMe Controllers
00:11:25.353 Attached to 0000:00:10.0
00:11:25.353 Attached to 0000:00:11.0
00:11:25.353 Attached to 0000:00:13.0
00:11:25.353 Attached to 0000:00:12.0
00:11:25.353 Initialization complete. Launching workers.
00:11:25.353 submit (in ns) avg, min, max = 12870.1, 9992.1, 60283.0 00:11:25.353 complete (in ns) avg, min, max = 8257.1, 6609.6, 50844.5 00:11:25.353 00:11:25.353 Submit histogram 00:11:25.353 ================ 00:11:25.353 Range in us Cumulative Count 00:11:25.353 9.949 - 10.005: 0.0076% ( 1) 00:11:25.353 10.900 - 10.955: 0.0151% ( 1) 00:11:25.353 10.955 - 11.011: 0.0454% ( 4) 00:11:25.353 11.011 - 11.067: 0.1514% ( 14) 00:11:25.353 11.067 - 11.123: 0.3103% ( 21) 00:11:25.353 11.123 - 11.179: 0.5373% ( 30) 00:11:25.353 11.179 - 11.235: 0.7947% ( 34) 00:11:25.353 11.235 - 11.291: 1.3093% ( 68) 00:11:25.353 11.291 - 11.347: 2.1040% ( 105) 00:11:25.353 11.347 - 11.403: 3.1030% ( 132) 00:11:25.353 11.403 - 11.459: 4.2761% ( 155) 00:11:25.353 11.459 - 11.514: 6.0622% ( 236) 00:11:25.353 11.514 - 11.570: 7.9997% ( 256) 00:11:25.353 11.570 - 11.626: 10.0885% ( 276) 00:11:25.353 11.626 - 11.682: 12.7375% ( 350) 00:11:25.353 11.682 - 11.738: 15.5756% ( 375) 00:11:25.353 11.738 - 11.794: 18.7543% ( 420) 00:11:25.353 11.794 - 11.850: 21.8573% ( 410) 00:11:25.353 11.850 - 11.906: 24.9376% ( 407) 00:11:25.353 11.906 - 11.962: 28.0633% ( 413) 00:11:25.353 11.962 - 12.017: 31.6582% ( 475) 00:11:25.353 12.017 - 12.073: 35.0867% ( 453) 00:11:25.353 12.073 - 12.129: 38.6211% ( 467) 00:11:25.353 12.129 - 12.185: 42.0495% ( 453) 00:11:25.353 12.185 - 12.241: 45.9850% ( 520) 00:11:25.353 12.241 - 12.297: 49.7994% ( 504) 00:11:25.353 12.297 - 12.353: 53.1446% ( 442) 00:11:25.353 12.353 - 12.409: 56.6563% ( 464) 00:11:25.353 12.409 - 12.465: 59.8426% ( 421) 00:11:25.353 12.465 - 12.521: 62.7942% ( 390) 00:11:25.353 12.521 - 12.576: 65.3447% ( 337) 00:11:25.353 12.576 - 12.632: 67.5471% ( 291) 00:11:25.353 12.632 - 12.688: 69.4316% ( 249) 00:11:25.353 12.688 - 12.744: 71.3010% ( 247) 00:11:25.353 12.744 - 12.800: 72.8979% ( 211) 00:11:25.354 12.800 - 12.856: 74.3207% ( 188) 00:11:25.354 12.856 - 12.912: 75.5847% ( 167) 00:11:25.354 12.912 - 12.968: 76.8561% ( 168) 00:11:25.354 12.968 - 13.024: 77.8854% ( 136) 00:11:25.354 13.024 - 13.079: 78.9677% ( 143) 00:11:25.354 13.079 - 13.135: 79.8078% ( 111) 00:11:25.354 13.135 - 13.191: 80.4965% ( 91) 00:11:25.354 13.191 - 13.247: 81.1776% ( 90) 00:11:25.354 13.247 - 13.303: 81.7982% ( 82) 00:11:25.354 13.303 - 13.359: 82.3659% ( 75) 00:11:25.354 13.359 - 13.415: 82.8881% ( 69) 00:11:25.354 13.415 - 13.471: 83.3724% ( 64) 00:11:25.354 13.471 - 13.527: 83.6979% ( 43) 00:11:25.354 13.527 - 13.583: 83.9779% ( 37) 00:11:25.354 13.583 - 13.638: 84.2352% ( 34) 00:11:25.354 13.638 - 13.694: 84.5985% ( 48) 00:11:25.354 13.694 - 13.750: 84.8407% ( 32) 00:11:25.354 13.750 - 13.806: 85.1131% ( 36) 00:11:25.354 13.806 - 13.862: 85.3326% ( 29) 00:11:25.354 13.862 - 13.918: 85.4991% ( 22) 00:11:25.354 13.918 - 13.974: 85.6505% ( 20) 00:11:25.354 13.974 - 14.030: 85.7489% ( 13) 00:11:25.354 14.030 - 14.086: 85.8397% ( 12) 00:11:25.354 14.086 - 14.141: 85.9911% ( 20) 00:11:25.354 14.141 - 14.197: 86.1424% ( 20) 00:11:25.354 14.197 - 14.253: 86.2333% ( 12) 00:11:25.354 14.253 - 14.309: 86.3846% ( 20) 00:11:25.354 14.309 - 14.421: 86.7025% ( 42) 00:11:25.354 14.421 - 14.533: 87.2550% ( 73) 00:11:25.354 14.533 - 14.645: 88.1632% ( 120) 00:11:25.354 14.645 - 14.756: 89.2833% ( 148) 00:11:25.354 14.756 - 14.868: 90.3277% ( 138) 00:11:25.354 14.868 - 14.980: 91.1905% ( 114) 00:11:25.354 14.980 - 15.092: 91.8262% ( 84) 00:11:25.354 15.092 - 15.203: 92.3257% ( 66) 00:11:25.354 15.203 - 15.315: 92.8404% ( 68) 00:11:25.354 15.315 - 15.427: 93.2566% ( 55) 00:11:25.354 15.427 - 15.539: 93.5896% ( 
44) 00:11:25.354 15.539 - 15.651: 93.9151% ( 43) 00:11:25.354 15.651 - 15.762: 94.1648% ( 33) 00:11:25.354 15.762 - 15.874: 94.3465% ( 24) 00:11:25.354 15.874 - 15.986: 94.6265% ( 37) 00:11:25.354 15.986 - 16.098: 94.8233% ( 26) 00:11:25.354 16.098 - 16.210: 95.0428% ( 29) 00:11:25.354 16.210 - 16.321: 95.2698% ( 30) 00:11:25.354 16.321 - 16.433: 95.5196% ( 33) 00:11:25.354 16.433 - 16.545: 95.6861% ( 22) 00:11:25.354 16.545 - 16.657: 95.8677% ( 24) 00:11:25.354 16.657 - 16.769: 96.0948% ( 30) 00:11:25.354 16.769 - 16.880: 96.2613% ( 22) 00:11:25.354 16.880 - 16.992: 96.3294% ( 9) 00:11:25.354 16.992 - 17.104: 96.4505% ( 16) 00:11:25.354 17.104 - 17.216: 96.5564% ( 14) 00:11:25.354 17.216 - 17.328: 96.6624% ( 14) 00:11:25.354 17.328 - 17.439: 96.7986% ( 18) 00:11:25.354 17.439 - 17.551: 96.8970% ( 13) 00:11:25.354 17.551 - 17.663: 96.9727% ( 10) 00:11:25.354 17.663 - 17.775: 97.0786% ( 14) 00:11:25.354 17.775 - 17.886: 97.1467% ( 9) 00:11:25.354 17.886 - 17.998: 97.2376% ( 12) 00:11:25.354 17.998 - 18.110: 97.3133% ( 10) 00:11:25.354 18.110 - 18.222: 97.3814% ( 9) 00:11:25.354 18.222 - 18.334: 97.4722% ( 12) 00:11:25.354 18.334 - 18.445: 97.5479% ( 10) 00:11:25.354 18.445 - 18.557: 97.6387% ( 12) 00:11:25.354 18.557 - 18.669: 97.7522% ( 15) 00:11:25.354 18.669 - 18.781: 97.7749% ( 3) 00:11:25.354 18.781 - 18.893: 97.8279% ( 7) 00:11:25.354 18.893 - 19.004: 97.8506% ( 3) 00:11:25.354 19.004 - 19.116: 97.9263% ( 10) 00:11:25.354 19.116 - 19.228: 97.9944% ( 9) 00:11:25.354 19.228 - 19.340: 98.0701% ( 10) 00:11:25.354 19.340 - 19.452: 98.1231% ( 7) 00:11:25.354 19.452 - 19.563: 98.1987% ( 10) 00:11:25.354 19.563 - 19.675: 98.2669% ( 9) 00:11:25.354 19.675 - 19.787: 98.3425% ( 10) 00:11:25.354 19.787 - 19.899: 98.4258% ( 11) 00:11:25.354 19.899 - 20.010: 98.4712% ( 6) 00:11:25.354 20.010 - 20.122: 98.5317% ( 8) 00:11:25.354 20.122 - 20.234: 98.5999% ( 9) 00:11:25.354 20.234 - 20.346: 98.6680% ( 9) 00:11:25.354 20.346 - 20.458: 98.6983% ( 4) 00:11:25.354 20.458 - 20.569: 98.7285% ( 4) 00:11:25.354 20.569 - 20.681: 98.7664% ( 5) 00:11:25.354 20.681 - 20.793: 98.8118% ( 6) 00:11:25.354 20.793 - 20.905: 98.8572% ( 6) 00:11:25.354 20.905 - 21.017: 98.9026% ( 6) 00:11:25.354 21.017 - 21.128: 98.9329% ( 4) 00:11:25.354 21.128 - 21.240: 98.9783% ( 6) 00:11:25.354 21.240 - 21.352: 98.9934% ( 2) 00:11:25.354 21.352 - 21.464: 99.0086% ( 2) 00:11:25.354 21.464 - 21.576: 99.0161% ( 1) 00:11:25.354 21.576 - 21.687: 99.0540% ( 5) 00:11:25.354 21.687 - 21.799: 99.0994% ( 6) 00:11:25.354 21.799 - 21.911: 99.1145% ( 2) 00:11:25.354 21.911 - 22.023: 99.1296% ( 2) 00:11:25.354 22.023 - 22.134: 99.1523% ( 3) 00:11:25.354 22.134 - 22.246: 99.1902% ( 5) 00:11:25.354 22.246 - 22.358: 99.2129% ( 3) 00:11:25.354 22.358 - 22.470: 99.2659% ( 7) 00:11:25.354 22.470 - 22.582: 99.2810% ( 2) 00:11:25.354 22.582 - 22.693: 99.2886% ( 1) 00:11:25.354 22.805 - 22.917: 99.3037% ( 2) 00:11:25.354 23.141 - 23.252: 99.3189% ( 2) 00:11:25.354 23.252 - 23.364: 99.3416% ( 3) 00:11:25.354 23.364 - 23.476: 99.3567% ( 2) 00:11:25.354 23.476 - 23.588: 99.3643% ( 1) 00:11:25.354 23.588 - 23.700: 99.3794% ( 2) 00:11:25.354 23.700 - 23.811: 99.3945% ( 2) 00:11:25.354 23.923 - 24.035: 99.4021% ( 1) 00:11:25.354 24.035 - 24.147: 99.4248% ( 3) 00:11:25.354 24.147 - 24.259: 99.4324% ( 1) 00:11:25.354 24.370 - 24.482: 99.4551% ( 3) 00:11:25.354 24.594 - 24.706: 99.4627% ( 1) 00:11:25.354 24.817 - 24.929: 99.4702% ( 1) 00:11:25.354 24.929 - 25.041: 99.4778% ( 1) 00:11:25.354 25.041 - 25.153: 99.4854% ( 1) 00:11:25.354 25.265 - 25.376: 99.4929% ( 1) 
00:11:25.354 25.600 - 25.712: 99.5081% ( 2) 00:11:25.354 25.824 - 25.935: 99.5156% ( 1) 00:11:25.354 26.383 - 26.494: 99.5383% ( 3) 00:11:25.354 26.606 - 26.718: 99.5610% ( 3) 00:11:25.354 26.718 - 26.830: 99.5686% ( 1) 00:11:25.354 26.830 - 26.941: 99.5762% ( 1) 00:11:25.354 27.053 - 27.165: 99.5837% ( 1) 00:11:25.354 27.724 - 27.836: 99.5913% ( 1) 00:11:25.354 27.948 - 28.059: 99.6064% ( 2) 00:11:25.354 28.171 - 28.283: 99.6140% ( 1) 00:11:25.354 28.283 - 28.395: 99.6216% ( 1) 00:11:25.354 28.395 - 28.507: 99.6519% ( 4) 00:11:25.354 28.507 - 28.618: 99.6594% ( 1) 00:11:25.354 28.618 - 28.842: 99.6670% ( 1) 00:11:25.354 28.842 - 29.066: 99.7351% ( 9) 00:11:25.354 29.066 - 29.289: 99.7578% ( 3) 00:11:25.354 29.289 - 29.513: 99.7654% ( 1) 00:11:25.354 29.513 - 29.736: 99.8108% ( 6) 00:11:25.354 29.736 - 29.960: 99.8335% ( 3) 00:11:25.354 29.960 - 30.183: 99.8486% ( 2) 00:11:25.354 30.407 - 30.631: 99.8638% ( 2) 00:11:25.354 30.631 - 30.854: 99.8713% ( 1) 00:11:25.354 30.854 - 31.078: 99.8865% ( 2) 00:11:25.354 31.078 - 31.301: 99.8940% ( 1) 00:11:25.354 31.301 - 31.525: 99.9016% ( 1) 00:11:25.354 31.525 - 31.748: 99.9167% ( 2) 00:11:25.354 31.748 - 31.972: 99.9319% ( 2) 00:11:25.354 31.972 - 32.196: 99.9395% ( 1) 00:11:25.354 32.643 - 32.866: 99.9546% ( 2) 00:11:25.354 33.537 - 33.761: 99.9622% ( 1) 00:11:25.354 33.761 - 33.984: 99.9697% ( 1) 00:11:25.354 35.326 - 35.549: 99.9773% ( 1) 00:11:25.354 46.281 - 46.505: 99.9849% ( 1) 00:11:25.354 48.741 - 48.964: 99.9924% ( 1) 00:11:25.354 59.920 - 60.367: 100.0000% ( 1) 00:11:25.354 00:11:25.354 Complete histogram 00:11:25.354 ================== 00:11:25.354 Range in us Cumulative Count 00:11:25.354 6.596 - 6.624: 0.0076% ( 1) 00:11:25.354 6.624 - 6.652: 0.0151% ( 1) 00:11:25.354 6.652 - 6.679: 0.0681% ( 7) 00:11:25.354 6.679 - 6.707: 0.1816% ( 15) 00:11:25.354 6.707 - 6.735: 0.3557% ( 23) 00:11:25.354 6.735 - 6.763: 0.7417% ( 51) 00:11:25.354 6.763 - 6.791: 1.3472% ( 80) 00:11:25.354 6.791 - 6.819: 1.9829% ( 84) 00:11:25.354 6.819 - 6.847: 2.8078% ( 109) 00:11:25.354 6.847 - 6.875: 4.0188% ( 160) 00:11:25.354 6.875 - 6.903: 5.2221% ( 159) 00:11:25.354 6.903 - 6.931: 6.2666% ( 138) 00:11:25.354 6.931 - 6.959: 7.4548% ( 157) 00:11:25.354 6.959 - 6.987: 8.8171% ( 180) 00:11:25.354 6.987 - 7.015: 9.9977% ( 156) 00:11:25.354 7.015 - 7.043: 11.2995% ( 172) 00:11:25.354 7.043 - 7.071: 12.6315% ( 176) 00:11:25.354 7.071 - 7.099: 14.0241% ( 184) 00:11:25.354 7.099 - 7.127: 15.3637% ( 177) 00:11:25.354 7.127 - 7.155: 16.4913% ( 149) 00:11:25.354 7.155 - 7.210: 18.9208% ( 321) 00:11:25.355 7.210 - 7.266: 21.3048% ( 315) 00:11:25.355 7.266 - 7.322: 23.3861% ( 275) 00:11:25.355 7.322 - 7.378: 25.7020% ( 306) 00:11:25.355 7.378 - 7.434: 28.2298% ( 334) 00:11:25.355 7.434 - 7.490: 30.8862% ( 351) 00:11:25.355 7.490 - 7.546: 34.3677% ( 460) 00:11:25.355 7.546 - 7.602: 38.2124% ( 508) 00:11:25.355 7.602 - 7.658: 42.4960% ( 566) 00:11:25.355 7.658 - 7.714: 46.5526% ( 536) 00:11:25.355 7.714 - 7.769: 50.2989% ( 495) 00:11:25.355 7.769 - 7.825: 53.3868% ( 408) 00:11:25.355 7.825 - 7.881: 55.9146% ( 334) 00:11:25.355 7.881 - 7.937: 58.2154% ( 304) 00:11:25.355 7.937 - 7.993: 60.4783% ( 299) 00:11:25.355 7.993 - 8.049: 62.4536% ( 261) 00:11:25.355 8.049 - 8.105: 64.0354% ( 209) 00:11:25.355 8.105 - 8.161: 65.6172% ( 209) 00:11:25.355 8.161 - 8.217: 67.0703% ( 192) 00:11:25.355 8.217 - 8.272: 68.4175% ( 178) 00:11:25.355 8.272 - 8.328: 69.6587% ( 164) 00:11:25.355 8.328 - 8.384: 70.8242% ( 154) 00:11:25.355 8.384 - 8.440: 72.0730% ( 165) 00:11:25.355 8.440 - 8.496: 
73.1855% ( 147) 00:11:25.355 8.496 - 8.552: 74.2905% ( 146) 00:11:25.355 8.552 - 8.608: 75.5014% ( 160) 00:11:25.355 8.608 - 8.664: 76.5610% ( 140) 00:11:25.355 8.664 - 8.720: 77.5827% ( 135) 00:11:25.355 8.720 - 8.776: 78.5741% ( 131) 00:11:25.355 8.776 - 8.831: 79.5277% ( 126) 00:11:25.355 8.831 - 8.887: 80.3073% ( 103) 00:11:25.355 8.887 - 8.943: 81.0944% ( 104) 00:11:25.355 8.943 - 8.999: 81.9269% ( 110) 00:11:25.355 8.999 - 9.055: 82.6459% ( 95) 00:11:25.355 9.055 - 9.111: 83.4784% ( 110) 00:11:25.355 9.111 - 9.167: 84.0006% ( 69) 00:11:25.355 9.167 - 9.223: 84.4850% ( 64) 00:11:25.355 9.223 - 9.279: 85.0829% ( 79) 00:11:25.355 9.279 - 9.334: 85.5218% ( 58) 00:11:25.355 9.334 - 9.390: 85.8851% ( 48) 00:11:25.355 9.390 - 9.446: 86.2635% ( 50) 00:11:25.355 9.446 - 9.502: 86.7403% ( 63) 00:11:25.355 9.502 - 9.558: 87.4139% ( 89) 00:11:25.355 9.558 - 9.614: 88.1556% ( 98) 00:11:25.355 9.614 - 9.670: 88.8292% ( 89) 00:11:25.355 9.670 - 9.726: 89.5330% ( 93) 00:11:25.355 9.726 - 9.782: 90.1385% ( 80) 00:11:25.355 9.782 - 9.838: 90.8424% ( 93) 00:11:25.355 9.838 - 9.893: 91.4478% ( 80) 00:11:25.355 9.893 - 9.949: 91.8565% ( 54) 00:11:25.355 9.949 - 10.005: 92.1214% ( 35) 00:11:25.355 10.005 - 10.061: 92.4014% ( 37) 00:11:25.355 10.061 - 10.117: 92.6739% ( 36) 00:11:25.355 10.117 - 10.173: 92.9615% ( 38) 00:11:25.355 10.173 - 10.229: 93.1583% ( 26) 00:11:25.355 10.229 - 10.285: 93.3323% ( 23) 00:11:25.355 10.285 - 10.341: 93.4837% ( 20) 00:11:25.355 10.341 - 10.397: 93.6048% ( 16) 00:11:25.355 10.397 - 10.452: 93.6805% ( 10) 00:11:25.355 10.452 - 10.508: 93.8167% ( 18) 00:11:25.355 10.508 - 10.564: 93.9681% ( 20) 00:11:25.355 10.564 - 10.620: 94.1875% ( 29) 00:11:25.355 10.620 - 10.676: 94.3616% ( 23) 00:11:25.355 10.676 - 10.732: 94.5357% ( 23) 00:11:25.355 10.732 - 10.788: 94.7930% ( 34) 00:11:25.355 10.788 - 10.844: 95.0806% ( 38) 00:11:25.355 10.844 - 10.900: 95.2395% ( 21) 00:11:25.355 10.900 - 10.955: 95.4136% ( 23) 00:11:25.355 10.955 - 11.011: 95.5498% ( 18) 00:11:25.355 11.011 - 11.067: 95.7390% ( 25) 00:11:25.355 11.067 - 11.123: 95.9585% ( 29) 00:11:25.355 11.123 - 11.179: 96.1402% ( 24) 00:11:25.355 11.179 - 11.235: 96.3142% ( 23) 00:11:25.355 11.235 - 11.291: 96.4051% ( 12) 00:11:25.355 11.291 - 11.347: 96.4807% ( 10) 00:11:25.355 11.347 - 11.403: 96.6094% ( 17) 00:11:25.355 11.403 - 11.459: 96.7456% ( 18) 00:11:25.355 11.459 - 11.514: 96.8289% ( 11) 00:11:25.355 11.514 - 11.570: 96.9802% ( 20) 00:11:25.355 11.570 - 11.626: 97.1089% ( 17) 00:11:25.355 11.626 - 11.682: 97.2073% ( 13) 00:11:25.355 11.682 - 11.738: 97.2905% ( 11) 00:11:25.355 11.738 - 11.794: 97.3587% ( 9) 00:11:25.355 11.794 - 11.850: 97.4268% ( 9) 00:11:25.355 11.850 - 11.906: 97.4570% ( 4) 00:11:25.355 11.906 - 11.962: 97.5025% ( 6) 00:11:25.355 11.962 - 12.017: 97.5933% ( 12) 00:11:25.355 12.017 - 12.073: 97.6387% ( 6) 00:11:25.355 12.073 - 12.129: 97.6765% ( 5) 00:11:25.355 12.129 - 12.185: 97.7068% ( 4) 00:11:25.355 12.185 - 12.241: 97.7446% ( 5) 00:11:25.355 12.241 - 12.297: 97.7674% ( 3) 00:11:25.355 12.297 - 12.353: 97.8279% ( 8) 00:11:25.355 12.353 - 12.409: 97.8733% ( 6) 00:11:25.355 12.409 - 12.465: 97.9036% ( 4) 00:11:25.355 12.465 - 12.521: 97.9339% ( 4) 00:11:25.355 12.521 - 12.576: 97.9566% ( 3) 00:11:25.355 12.576 - 12.632: 97.9944% ( 5) 00:11:25.355 12.632 - 12.688: 98.0171% ( 3) 00:11:25.355 12.688 - 12.744: 98.0398% ( 3) 00:11:25.355 12.744 - 12.800: 98.0701% ( 4) 00:11:25.355 12.800 - 12.856: 98.0777% ( 1) 00:11:25.355 12.856 - 12.912: 98.1155% ( 5) 00:11:25.355 12.968 - 13.024: 98.1306% ( 2) 
00:11:25.355 13.079 - 13.135: 98.1382% ( 1) 00:11:25.355 13.135 - 13.191: 98.1533% ( 2) 00:11:25.355 13.191 - 13.247: 98.1685% ( 2) 00:11:25.355 13.247 - 13.303: 98.1912% ( 3) 00:11:25.355 13.471 - 13.527: 98.2139% ( 3) 00:11:25.355 13.527 - 13.583: 98.2517% ( 5) 00:11:25.355 13.583 - 13.638: 98.2744% ( 3) 00:11:25.355 13.638 - 13.694: 98.2971% ( 3) 00:11:25.355 13.694 - 13.750: 98.3123% ( 2) 00:11:25.355 13.750 - 13.806: 98.3350% ( 3) 00:11:25.355 13.806 - 13.862: 98.3501% ( 2) 00:11:25.355 13.862 - 13.918: 98.3728% ( 3) 00:11:25.355 13.918 - 13.974: 98.3880% ( 2) 00:11:25.355 14.030 - 14.086: 98.4182% ( 4) 00:11:25.355 14.086 - 14.141: 98.4334% ( 2) 00:11:25.355 14.141 - 14.197: 98.4409% ( 1) 00:11:25.355 14.197 - 14.253: 98.4561% ( 2) 00:11:25.355 14.253 - 14.309: 98.4788% ( 3) 00:11:25.355 14.309 - 14.421: 98.5090% ( 4) 00:11:25.355 14.421 - 14.533: 98.5545% ( 6) 00:11:25.355 14.533 - 14.645: 98.5772% ( 3) 00:11:25.355 14.645 - 14.756: 98.6226% ( 6) 00:11:25.355 14.756 - 14.868: 98.6907% ( 9) 00:11:25.355 14.868 - 14.980: 98.7588% ( 9) 00:11:25.355 14.980 - 15.092: 98.8269% ( 9) 00:11:25.355 15.092 - 15.203: 98.8875% ( 8) 00:11:25.355 15.203 - 15.315: 98.9177% ( 4) 00:11:25.355 15.315 - 15.427: 98.9631% ( 6) 00:11:25.355 15.427 - 15.539: 99.0010% ( 5) 00:11:25.355 15.539 - 15.651: 99.0464% ( 6) 00:11:25.355 15.651 - 15.762: 99.0842% ( 5) 00:11:25.355 15.762 - 15.874: 99.1069% ( 3) 00:11:25.355 15.874 - 15.986: 99.1523% ( 6) 00:11:25.355 15.986 - 16.098: 99.1751% ( 3) 00:11:25.355 16.210 - 16.321: 99.1902% ( 2) 00:11:25.355 16.321 - 16.433: 99.2053% ( 2) 00:11:25.355 16.433 - 16.545: 99.2356% ( 4) 00:11:25.355 16.545 - 16.657: 99.2659% ( 4) 00:11:25.355 16.657 - 16.769: 99.2886% ( 3) 00:11:25.355 16.769 - 16.880: 99.3113% ( 3) 00:11:25.355 16.880 - 16.992: 99.3416% ( 4) 00:11:25.355 17.104 - 17.216: 99.3643% ( 3) 00:11:25.355 17.216 - 17.328: 99.3870% ( 3) 00:11:25.355 17.328 - 17.439: 99.3945% ( 1) 00:11:25.355 17.439 - 17.551: 99.4021% ( 1) 00:11:25.355 17.551 - 17.663: 99.4248% ( 3) 00:11:25.355 17.663 - 17.775: 99.4324% ( 1) 00:11:25.355 17.886 - 17.998: 99.4551% ( 3) 00:11:25.355 17.998 - 18.110: 99.4929% ( 5) 00:11:25.355 18.445 - 18.557: 99.5005% ( 1) 00:11:25.355 18.669 - 18.781: 99.5156% ( 2) 00:11:25.355 19.004 - 19.116: 99.5232% ( 1) 00:11:25.355 19.228 - 19.340: 99.5308% ( 1) 00:11:25.355 19.452 - 19.563: 99.5459% ( 2) 00:11:25.355 19.787 - 19.899: 99.5610% ( 2) 00:11:25.355 20.234 - 20.346: 99.5686% ( 1) 00:11:25.355 20.458 - 20.569: 99.5989% ( 4) 00:11:25.355 20.569 - 20.681: 99.6064% ( 1) 00:11:25.355 21.017 - 21.128: 99.6140% ( 1) 00:11:25.355 21.352 - 21.464: 99.6216% ( 1) 00:11:25.355 21.464 - 21.576: 99.6292% ( 1) 00:11:25.355 21.799 - 21.911: 99.6519% ( 3) 00:11:25.355 22.693 - 22.805: 99.6594% ( 1) 00:11:25.355 22.805 - 22.917: 99.6746% ( 2) 00:11:25.355 23.700 - 23.811: 99.6821% ( 1) 00:11:25.355 23.811 - 23.923: 99.6973% ( 2) 00:11:25.355 23.923 - 24.035: 99.7124% ( 2) 00:11:25.355 24.035 - 24.147: 99.7427% ( 4) 00:11:25.355 24.147 - 24.259: 99.7502% ( 1) 00:11:25.355 24.259 - 24.370: 99.7578% ( 1) 00:11:25.355 24.370 - 24.482: 99.7654% ( 1) 00:11:25.355 24.594 - 24.706: 99.7730% ( 1) 00:11:25.355 24.706 - 24.817: 99.7805% ( 1) 00:11:25.355 24.929 - 25.041: 99.7881% ( 1) 00:11:25.355 25.041 - 25.153: 99.7957% ( 1) 00:11:25.355 25.153 - 25.265: 99.8032% ( 1) 00:11:25.355 25.265 - 25.376: 99.8184% ( 2) 00:11:25.355 25.376 - 25.488: 99.8259% ( 1) 00:11:25.355 25.488 - 25.600: 99.8411% ( 2) 00:11:25.355 25.824 - 25.935: 99.8486% ( 1) 00:11:25.355 26.047 - 26.159: 
99.8638% ( 2) 00:11:25.356 26.159 - 26.271: 99.8713% ( 1) 00:11:25.356 26.383 - 26.494: 99.8940% ( 3) 00:11:25.356 26.494 - 26.606: 99.9016% ( 1) 00:11:25.356 26.830 - 26.941: 99.9167% ( 2) 00:11:25.356 27.724 - 27.836: 99.9243% ( 1) 00:11:25.356 28.059 - 28.171: 99.9319% ( 1) 00:11:25.356 28.507 - 28.618: 99.9395% ( 1) 00:11:25.356 29.289 - 29.513: 99.9470% ( 1) 00:11:25.356 34.879 - 35.102: 99.9546% ( 1) 00:11:25.356 35.549 - 35.773: 99.9622% ( 1) 00:11:25.356 36.220 - 36.444: 99.9697% ( 1) 00:11:25.356 42.928 - 43.151: 99.9773% ( 1) 00:11:25.356 45.834 - 46.058: 99.9849% ( 1) 00:11:25.356 49.859 - 50.082: 99.9924% ( 1) 00:11:25.356 50.753 - 50.976: 100.0000% ( 1) 00:11:25.356 00:11:25.356 00:11:25.356 real 0m1.284s 00:11:25.356 user 0m1.083s 00:11:25.356 sys 0m0.147s 00:11:25.356 ************************************ 00:11:25.356 END TEST nvme_overhead 00:11:25.356 ************************************ 00:11:25.356 19:25:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:25.356 19:25:50 -- common/autotest_common.sh@10 -- # set +x 00:11:25.356 19:25:50 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:25.356 19:25:50 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:11:25.356 19:25:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:25.356 19:25:50 -- common/autotest_common.sh@10 -- # set +x 00:11:25.356 ************************************ 00:11:25.356 START TEST nvme_arbitration 00:11:25.356 ************************************ 00:11:25.356 19:25:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:29.540 Initializing NVMe Controllers 00:11:29.540 Attached to 0000:00:10.0 00:11:29.540 Attached to 0000:00:11.0 00:11:29.540 Attached to 0000:00:13.0 00:11:29.540 Attached to 0000:00:12.0 00:11:29.540 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:11:29.540 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:11:29.540 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:11:29.540 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:11:29.540 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:11:29.540 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:11:29.540 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:11:29.540 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:11:29.540 Initialization complete. Launching workers. 
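The arbitration example prints its full configuration line just above before launching its workers, which makes the run easy to reproduce outside the harness. A minimal replay sketch in bash, assuming the same checkout layout as this job and NVMe devices already bound to a userspace driver (SPDK_REPO is an illustrative variable, not part of the harness):

    # Replay of the arbitration run traced above.
    # -q 64: queue depth, -w randrw -M 50: 50/50 random read/write mix,
    # -t 3: run for 3 seconds, -c 0xf: lcores 0-3, -i 0: shared-memory ID 0.
    # The remaining flags are copied verbatim from the printed configuration.
    SPDK_REPO=${SPDK_REPO:-/home/vagrant/spdk_repo/spdk}
    sudo "$SPDK_REPO/build/examples/arbitration" \
        -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0

The per-core IO/s lines that follow are the workers' own throughput reports.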
00:11:29.540 Starting thread on core 1 with urgent priority queue
00:11:29.540 Starting thread on core 2 with urgent priority queue
00:11:29.540 Starting thread on core 3 with urgent priority queue
00:11:29.540 Starting thread on core 0 with urgent priority queue
00:11:29.540 QEMU NVMe Ctrl (12340 ) core 0: 469.33 IO/s 213.07 secs/100000 ios
00:11:29.540 QEMU NVMe Ctrl (12342 ) core 0: 469.33 IO/s 213.07 secs/100000 ios
00:11:29.541 QEMU NVMe Ctrl (12341 ) core 1: 469.33 IO/s 213.07 secs/100000 ios
00:11:29.541 QEMU NVMe Ctrl (12342 ) core 1: 469.33 IO/s 213.07 secs/100000 ios
00:11:29.541 QEMU NVMe Ctrl (12343 ) core 2: 490.67 IO/s 203.80 secs/100000 ios
00:11:29.541 QEMU NVMe Ctrl (12342 ) core 3: 469.33 IO/s 213.07 secs/100000 ios
00:11:29.541 ========================================================
00:11:29.541
00:11:29.541
00:11:29.541 real 0m3.441s
00:11:29.541 user 0m9.431s
00:11:29.541 sys 0m0.160s
00:11:29.541 19:25:54 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:11:29.541 19:25:54 -- common/autotest_common.sh@10 -- # set +x
00:11:29.541 ************************************
00:11:29.541 END TEST nvme_arbitration
00:11:29.541 ************************************
00:11:29.541 19:25:54 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:11:29.541 19:25:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:11:29.541 19:25:54 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:11:29.541 19:25:54 -- common/autotest_common.sh@10 -- # set +x
00:11:29.541 ************************************
00:11:29.541 START TEST nvme_single_aen
00:11:29.541 ************************************
00:11:29.541 19:25:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:11:29.541 Asynchronous Event Request test
00:11:29.541 Attached to 0000:00:10.0
00:11:29.541 Attached to 0000:00:11.0
00:11:29.541 Attached to 0000:00:13.0
00:11:29.541 Attached to 0000:00:12.0
00:11:29.541 Reset controller to setup AER completions for this process
00:11:29.541 Registering asynchronous event callbacks...
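The START/END banners and real/user/sys timings that bracket each unit in this log come from the harness's run_test wrapper in autotest_common.sh. A simplified sketch of its visible behavior (the real helper also validates arguments and manages xtrace; this stand-in only reproduces the banner-and-timing pattern seen here):

    run_test_sketch() {             # hedged stand-in, not the real helper
        local test_name=$1; shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"                   # produces the real/user/sys lines above
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }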
00:11:29.541 Getting orig temperature thresholds of all controllers 00:11:29.541 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:29.541 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:29.541 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:29.541 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:29.541 Setting all controllers temperature threshold low to trigger AER 00:11:29.541 Waiting for all controllers temperature threshold to be set lower 00:11:29.541 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:29.541 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:29.541 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:29.541 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:29.541 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:29.541 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:29.541 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:29.541 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:29.541 Waiting for all controllers to trigger AER and reset threshold 00:11:29.541 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.541 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.541 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.541 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:29.541 Cleaning up... 00:11:29.541 00:11:29.541 real 0m0.279s 00:11:29.541 user 0m0.089s 00:11:29.541 sys 0m0.138s 00:11:29.541 19:25:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:29.541 19:25:54 -- common/autotest_common.sh@10 -- # set +x 00:11:29.541 ************************************ 00:11:29.541 END TEST nvme_single_aen 00:11:29.541 ************************************ 00:11:29.541 19:25:54 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:11:29.541 19:25:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:29.541 19:25:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:29.541 19:25:54 -- common/autotest_common.sh@10 -- # set +x 00:11:29.541 ************************************ 00:11:29.541 START TEST nvme_doorbell_aers 00:11:29.541 ************************************ 00:11:29.541 19:25:54 -- common/autotest_common.sh@1111 -- # nvme_doorbell_aers 00:11:29.541 19:25:54 -- nvme/nvme.sh@70 -- # bdfs=() 00:11:29.541 19:25:54 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:11:29.541 19:25:54 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:11:29.541 19:25:54 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:11:29.541 19:25:54 -- common/autotest_common.sh@1499 -- # bdfs=() 00:11:29.541 19:25:54 -- common/autotest_common.sh@1499 -- # local bdfs 00:11:29.541 19:25:54 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:29.541 19:25:54 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:29.541 19:25:54 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:11:29.541 19:25:54 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:11:29.541 19:25:54 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:29.541 19:25:54 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:29.541 19:25:54 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:29.541 [2024-04-24 19:25:55.200369] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:11:39.513 Executing: test_write_invalid_db 00:11:39.513 Waiting for AER completion... 00:11:39.513 Failure: test_write_invalid_db 00:11:39.513 00:11:39.513 Executing: test_invalid_db_write_overflow_sq 00:11:39.513 Waiting for AER completion... 00:11:39.513 Failure: test_invalid_db_write_overflow_sq 00:11:39.513 00:11:39.513 Executing: test_invalid_db_write_overflow_cq 00:11:39.513 Waiting for AER completion... 00:11:39.514 Failure: test_invalid_db_write_overflow_cq 00:11:39.514 00:11:39.514 19:26:04 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:39.514 19:26:04 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:39.772 [2024-04-24 19:26:05.262728] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:11:49.756 Executing: test_write_invalid_db 00:11:49.756 Waiting for AER completion... 00:11:49.756 Failure: test_write_invalid_db 00:11:49.756 00:11:49.756 Executing: test_invalid_db_write_overflow_sq 00:11:49.756 Waiting for AER completion... 00:11:49.756 Failure: test_invalid_db_write_overflow_sq 00:11:49.756 00:11:49.756 Executing: test_invalid_db_write_overflow_cq 00:11:49.756 Waiting for AER completion... 00:11:49.756 Failure: test_invalid_db_write_overflow_cq 00:11:49.756 00:11:49.756 19:26:15 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:49.756 19:26:15 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:49.756 [2024-04-24 19:26:15.315955] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:11:59.836 Executing: test_write_invalid_db 00:11:59.836 Waiting for AER completion... 00:11:59.836 Failure: test_write_invalid_db 00:11:59.836 00:11:59.836 Executing: test_invalid_db_write_overflow_sq 00:11:59.836 Waiting for AER completion... 00:11:59.836 Failure: test_invalid_db_write_overflow_sq 00:11:59.836 00:11:59.836 Executing: test_invalid_db_write_overflow_cq 00:11:59.836 Waiting for AER completion... 00:11:59.836 Failure: test_invalid_db_write_overflow_cq 00:11:59.836 00:11:59.836 19:26:25 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:59.836 19:26:25 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:59.836 [2024-04-24 19:26:25.348082] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:12:09.798 Executing: test_write_invalid_db 00:12:09.798 Waiting for AER completion... 00:12:09.798 Failure: test_write_invalid_db 00:12:09.798 00:12:09.798 Executing: test_invalid_db_write_overflow_sq 00:12:09.798 Waiting for AER completion... 00:12:09.798 Failure: test_invalid_db_write_overflow_sq 00:12:09.798 00:12:09.798 Executing: test_invalid_db_write_overflow_cq 00:12:09.798 Waiting for AER completion... 
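The xtrace above shows how nvme_doorbell_aers builds its device list: gen_nvme.sh emits an SPDK config and jq pulls out each controller's PCI address, after which every controller gets a ten-second doorbell_aers pass. The same loop as a standalone bash sketch (rootdir assumed to point at the SPDK checkout):

    rootdir=${rootdir:-/home/vagrant/spdk_repo/spdk}
    # Enumerate NVMe PCI addresses exactly as get_nvme_bdfs does above.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        # --preserve-status keeps doorbell_aers' own exit code if it finishes
        # before the 10 s timeout, matching the harness invocation above.
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done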
00:12:09.798 Failure: test_invalid_db_write_overflow_cq 00:12:09.798 00:12:09.798 00:12:09.798 real 0m40.247s 00:12:09.798 user 0m35.273s 00:12:09.798 sys 0m4.567s 00:12:09.798 19:26:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:09.798 19:26:35 -- common/autotest_common.sh@10 -- # set +x 00:12:09.798 ************************************ 00:12:09.798 END TEST nvme_doorbell_aers 00:12:09.798 ************************************ 00:12:09.798 19:26:35 -- nvme/nvme.sh@97 -- # uname 00:12:09.798 19:26:35 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:09.798 19:26:35 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:09.798 19:26:35 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:12:09.798 19:26:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.798 19:26:35 -- common/autotest_common.sh@10 -- # set +x 00:12:09.798 ************************************ 00:12:09.798 START TEST nvme_multi_aen 00:12:09.798 ************************************ 00:12:09.798 19:26:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:09.798 [2024-04-24 19:26:35.469213] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:12:09.798 [2024-04-24 19:26:35.469312] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:12:09.798 [2024-04-24 19:26:35.469338] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:12:09.798 [2024-04-24 19:26:35.470642] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:12:09.798 [2024-04-24 19:26:35.470687] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:12:09.798 [2024-04-24 19:26:35.470702] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:12:09.798 [2024-04-24 19:26:35.471589] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:12:09.798 [2024-04-24 19:26:35.471623] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:12:09.798 [2024-04-24 19:26:35.471647] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:12:09.798 [2024-04-24 19:26:35.472550] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:12:09.798 [2024-04-24 19:26:35.472586] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 00:12:09.798 [2024-04-24 19:26:35.472601] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70358) is not found. Dropping the request. 
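Judging by the two command lines in this log, nvme_multi_aen reuses the same aer binary that nvme_single_aen ran earlier; the only visible difference is the extra -m flag, which appears to fork the worker whose output is tagged [Child] below. Side by side:

    # nvme_single_aen (earlier): a single process lowers the temperature
    # thresholds and waits for the resulting AERs.
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0

    # nvme_multi_aen (this test): -m adds a child process, so parent and
    # child both register AER callbacks against the same controllers.
    /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0

The reading of -m is an inference from the [Child] output that follows, not from the aer source.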
00:12:10.057 Child process pid: 70880 00:12:10.057 [Child] Asynchronous Event Request test 00:12:10.057 [Child] Attached to 0000:00:10.0 00:12:10.057 [Child] Attached to 0000:00:11.0 00:12:10.057 [Child] Attached to 0000:00:13.0 00:12:10.057 [Child] Attached to 0000:00:12.0 00:12:10.057 [Child] Registering asynchronous event callbacks... 00:12:10.057 [Child] Getting orig temperature thresholds of all controllers 00:12:10.057 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:10.057 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:10.057 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:10.057 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:10.057 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:10.057 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:10.057 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:10.057 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:10.057 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:10.057 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.057 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.057 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.057 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.057 [Child] Cleaning up... 00:12:10.315 Asynchronous Event Request test 00:12:10.315 Attached to 0000:00:10.0 00:12:10.315 Attached to 0000:00:11.0 00:12:10.315 Attached to 0000:00:13.0 00:12:10.315 Attached to 0000:00:12.0 00:12:10.315 Reset controller to setup AER completions for this process 00:12:10.315 Registering asynchronous event callbacks... 
00:12:10.315 Getting orig temperature thresholds of all controllers 00:12:10.315 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:10.315 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:10.315 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:10.315 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:10.315 Setting all controllers temperature threshold low to trigger AER 00:12:10.315 Waiting for all controllers temperature threshold to be set lower 00:12:10.315 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:10.315 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:10.315 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:10.315 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:10.315 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:10.315 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:10.315 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:10.315 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:10.315 Waiting for all controllers to trigger AER and reset threshold 00:12:10.315 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.316 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.316 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.316 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:10.316 Cleaning up... 00:12:10.316 00:12:10.316 real 0m0.529s 00:12:10.316 user 0m0.177s 00:12:10.316 sys 0m0.249s 00:12:10.316 19:26:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:10.316 19:26:35 -- common/autotest_common.sh@10 -- # set +x 00:12:10.316 ************************************ 00:12:10.316 END TEST nvme_multi_aen 00:12:10.316 ************************************ 00:12:10.316 19:26:35 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:10.316 19:26:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:10.316 19:26:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.316 19:26:35 -- common/autotest_common.sh@10 -- # set +x 00:12:10.316 ************************************ 00:12:10.316 START TEST nvme_startup 00:12:10.316 ************************************ 00:12:10.316 19:26:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:10.574 Initializing NVMe Controllers 00:12:10.574 Attached to 0000:00:10.0 00:12:10.574 Attached to 0000:00:11.0 00:12:10.574 Attached to 0000:00:13.0 00:12:10.574 Attached to 0000:00:12.0 00:12:10.574 Initialization complete. 00:12:10.574 Time used:158129.312 (us). 
00:12:10.574 00:12:10.574 real 0m0.255s 00:12:10.574 user 0m0.078s 00:12:10.574 sys 0m0.134s 00:12:10.574 19:26:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:10.574 19:26:36 -- common/autotest_common.sh@10 -- # set +x 00:12:10.574 ************************************ 00:12:10.574 END TEST nvme_startup 00:12:10.574 ************************************ 00:12:10.574 19:26:36 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:10.574 19:26:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:10.574 19:26:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:10.574 19:26:36 -- common/autotest_common.sh@10 -- # set +x 00:12:10.574 ************************************ 00:12:10.574 START TEST nvme_multi_secondary 00:12:10.574 ************************************ 00:12:10.574 19:26:36 -- common/autotest_common.sh@1111 -- # nvme_multi_secondary 00:12:10.574 19:26:36 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:10.574 19:26:36 -- nvme/nvme.sh@52 -- # pid0=70944 00:12:10.574 19:26:36 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:10.574 19:26:36 -- nvme/nvme.sh@54 -- # pid1=70945 00:12:10.574 19:26:36 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:13.855 Initializing NVMe Controllers 00:12:13.855 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:13.855 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:13.855 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:13.855 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:13.855 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:13.855 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:13.855 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:13.855 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:13.855 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:13.855 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:13.855 Initialization complete. Launching workers. 
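nvme_multi_secondary drives three concurrent spdk_nvme_perf processes against the same controllers, all sharing shared-memory ID 0 but pinned to different cores; that is why the latency tables below arrive at different times (the -t 5 instance reports last). A bash sketch of the first round's orchestration as traced above; the actual function in nvme.sh may sequence the backgrounding slightly differently:

    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

    "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # core 0, runs 5 s
    pid0=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # core 1, runs 3 s
    pid1=$!
    "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4     # core 2, foreground
    wait "$pid0"
    wait "$pid1"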
00:12:13.855 ========================================================
00:12:13.855 Latency(us)
00:12:13.855 Device Information : IOPS MiB/s Average min max
00:12:13.855 PCIE (0000:00:10.0) NSID 1 from core 1: 5458.92 21.32 2928.58 1017.89 9191.40
00:12:13.855 PCIE (0000:00:11.0) NSID 1 from core 1: 5458.92 21.32 2930.71 1020.27 9650.63
00:12:13.855 PCIE (0000:00:13.0) NSID 1 from core 1: 5458.92 21.32 2931.00 1068.28 8847.07
00:12:13.855 PCIE (0000:00:12.0) NSID 1 from core 1: 5458.92 21.32 2931.13 1062.19 9101.96
00:12:13.855 PCIE (0000:00:12.0) NSID 2 from core 1: 5458.92 21.32 2931.36 1052.13 9250.04
00:12:13.855 PCIE (0000:00:12.0) NSID 3 from core 1: 5458.92 21.32 2931.70 1048.30 9055.83
00:12:13.855 ========================================================
00:12:13.855 Total : 32753.53 127.94 2930.75 1017.89 9650.63
00:12:13.855
00:12:14.112 Initializing NVMe Controllers
00:12:14.112 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:12:14.112 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:12:14.112 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:12:14.112 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:12:14.112 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:12:14.112 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:12:14.112 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:12:14.112 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:12:14.112 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:12:14.112 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:12:14.112 Initialization complete. Launching workers.
00:12:14.112 ========================================================
00:12:14.112 Latency(us)
00:12:14.112 Device Information : IOPS MiB/s Average min max
00:12:14.112 PCIE (0000:00:10.0) NSID 1 from core 2: 2848.73 11.13 5614.41 1388.34 14950.98
00:12:14.112 PCIE (0000:00:11.0) NSID 1 from core 2: 2848.73 11.13 5616.29 1424.15 15664.34
00:12:14.113 PCIE (0000:00:13.0) NSID 1 from core 2: 2848.73 11.13 5616.47 1269.26 15727.99
00:12:14.113 PCIE (0000:00:12.0) NSID 1 from core 2: 2848.73 11.13 5616.74 1278.29 16987.60
00:12:14.113 PCIE (0000:00:12.0) NSID 2 from core 2: 2848.73 11.13 5617.09 1288.33 14391.25
00:12:14.113 PCIE (0000:00:12.0) NSID 3 from core 2: 2848.73 11.13 5616.06 1421.35 13940.58
00:12:14.113 ========================================================
00:12:14.113 Total : 17092.35 66.77 5616.18 1269.26 16987.60
00:12:14.113
00:12:14.113 19:26:39 -- nvme/nvme.sh@56 -- # wait 70944
00:12:16.011 Initializing NVMe Controllers
00:12:16.011 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:12:16.011 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:12:16.011 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:12:16.011 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:12:16.011 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:12:16.011 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:12:16.011 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:12:16.011 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:12:16.011 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:12:16.011 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:12:16.011 Initialization complete. Launching workers.
00:12:16.011 ========================================================
00:12:16.011 Latency(us)
00:12:16.011 Device Information : IOPS MiB/s Average min max
00:12:16.011 PCIE (0000:00:10.0) NSID 1 from core 0: 8665.06 33.85 1844.74 777.38 8134.19
00:12:16.011 PCIE (0000:00:11.0) NSID 1 from core 0: 8665.06 33.85 1845.94 788.46 7933.12
00:12:16.011 PCIE (0000:00:13.0) NSID 1 from core 0: 8665.06 33.85 1845.88 804.26 7886.43
00:12:16.011 PCIE (0000:00:12.0) NSID 1 from core 0: 8665.06 33.85 1845.82 797.01 8142.82
00:12:16.011 PCIE (0000:00:12.0) NSID 2 from core 0: 8665.06 33.85 1845.76 785.40 8165.85
00:12:16.011 PCIE (0000:00:12.0) NSID 3 from core 0: 8665.06 33.85 1845.70 782.76 7902.64
00:12:16.011 ========================================================
00:12:16.011 Total : 51990.39 203.09 1845.64 777.38 8165.85
00:12:16.011
00:12:16.011 19:26:41 -- nvme/nvme.sh@57 -- # wait 70945
00:12:16.011 19:26:41 -- nvme/nvme.sh@61 -- # pid0=71016
00:12:16.011 19:26:41 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:12:16.011 19:26:41 -- nvme/nvme.sh@63 -- # pid1=71017
00:12:16.011 19:26:41 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:12:16.011 19:26:41 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:12:20.196 Initializing NVMe Controllers
00:12:20.196 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:12:20.196 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:12:20.196 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:12:20.196 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:12:20.196 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:12:20.196 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:12:20.196 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:12:20.196 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:12:20.196 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:12:20.196 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:12:20.196 Initialization complete. Launching workers.
00:12:20.196 ========================================================
00:12:20.196 Latency(us)
00:12:20.196 Device Information : IOPS MiB/s Average min max
00:12:20.196 PCIE (0000:00:10.0) NSID 1 from core 1: 5566.29 21.74 2872.19 929.79 9678.07
00:12:20.196 PCIE (0000:00:11.0) NSID 1 from core 1: 5566.29 21.74 2873.85 959.33 10387.63
00:12:20.196 PCIE (0000:00:13.0) NSID 1 from core 1: 5566.29 21.74 2874.25 959.60 9933.35
00:12:20.196 PCIE (0000:00:12.0) NSID 1 from core 1: 5566.29 21.74 2874.23 944.20 9936.12
00:12:20.196 PCIE (0000:00:12.0) NSID 2 from core 1: 5566.29 21.74 2874.49 945.97 9985.59
00:12:20.196 PCIE (0000:00:12.0) NSID 3 from core 1: 5571.62 21.76 2871.90 967.77 9851.53
00:12:20.196 ========================================================
00:12:20.196 Total : 33403.09 130.48 2873.48 929.79 10387.63
00:12:20.196
00:12:20.196 Initializing NVMe Controllers
00:12:20.196 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:12:20.196 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:12:20.196 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:12:20.196 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:12:20.196 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:12:20.196 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:12:20.196 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:12:20.196 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:12:20.196 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:12:20.196 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:12:20.196 Initialization complete. Launching workers.
00:12:20.196 ========================================================
00:12:20.196 Latency(us)
00:12:20.196 Device Information : IOPS MiB/s Average min max
00:12:20.196 PCIE (0000:00:10.0) NSID 1 from core 0: 5452.68 21.30 2931.91 980.52 10407.20
00:12:20.196 PCIE (0000:00:11.0) NSID 1 from core 0: 5452.68 21.30 2933.59 1060.72 10287.03
00:12:20.196 PCIE (0000:00:13.0) NSID 1 from core 0: 5452.68 21.30 2933.44 1050.50 10904.75
00:12:20.196 PCIE (0000:00:12.0) NSID 1 from core 0: 5452.68 21.30 2933.33 1029.64 10987.81
00:12:20.196 PCIE (0000:00:12.0) NSID 2 from core 0: 5452.68 21.30 2933.26 1039.03 10814.12
00:12:20.196 PCIE (0000:00:12.0) NSID 3 from core 0: 5452.68 21.30 2933.15 1040.97 10607.46
00:12:20.196 ========================================================
00:12:20.196 Total : 32716.07 127.80 2933.11 980.52 10987.81
00:12:20.196
00:12:21.573 Initializing NVMe Controllers
00:12:21.573 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:12:21.573 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:12:21.573 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:12:21.573 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:12:21.573 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:12:21.573 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:12:21.573 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:12:21.573 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:12:21.573 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:12:21.573 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:12:21.573 Initialization complete. Launching workers.
00:12:21.573 ======================================================== 00:12:21.573 Latency(us) 00:12:21.573 Device Information : IOPS MiB/s Average min max 00:12:21.573 PCIE (0000:00:10.0) NSID 1 from core 2: 2979.86 11.64 5367.23 1056.97 23510.73 00:12:21.573 PCIE (0000:00:11.0) NSID 1 from core 2: 2979.86 11.64 5369.11 1075.16 23492.03 00:12:21.573 PCIE (0000:00:13.0) NSID 1 from core 2: 2979.86 11.64 5368.88 1079.92 27620.26 00:12:21.573 PCIE (0000:00:12.0) NSID 1 from core 2: 2979.86 11.64 5369.19 1068.37 23412.68 00:12:21.573 PCIE (0000:00:12.0) NSID 2 from core 2: 2979.86 11.64 5368.44 1092.35 23094.14 00:12:21.573 PCIE (0000:00:12.0) NSID 3 from core 2: 2983.06 11.65 5363.28 1082.38 23165.22 00:12:21.573 ======================================================== 00:12:21.573 Total : 17882.35 69.85 5367.69 1056.97 27620.26 00:12:21.573 00:12:21.573 ************************************ 00:12:21.573 END TEST nvme_multi_secondary 00:12:21.573 ************************************ 00:12:21.573 19:26:46 -- nvme/nvme.sh@65 -- # wait 71016 00:12:21.573 19:26:46 -- nvme/nvme.sh@66 -- # wait 71017 00:12:21.573 00:12:21.573 real 0m10.789s 00:12:21.573 user 0m18.484s 00:12:21.573 sys 0m0.824s 00:12:21.573 19:26:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:21.573 19:26:46 -- common/autotest_common.sh@10 -- # set +x 00:12:21.573 19:26:47 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:21.574 19:26:47 -- nvme/nvme.sh@102 -- # kill_stub 00:12:21.574 19:26:47 -- common/autotest_common.sh@1075 -- # [[ -e /proc/69879 ]] 00:12:21.574 19:26:47 -- common/autotest_common.sh@1076 -- # kill 69879 00:12:21.574 19:26:47 -- common/autotest_common.sh@1077 -- # wait 69879 00:12:21.574 [2024-04-24 19:26:47.042953] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.043903] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.044089] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.044132] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.049399] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.049518] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.049562] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.049593] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.054448] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 
00:12:21.574 [2024-04-24 19:26:47.054545] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.054584] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.054616] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.058281] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.058377] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.058412] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.574 [2024-04-24 19:26:47.058433] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70879) is not found. Dropping the request. 00:12:21.833 19:26:47 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:12:21.833 19:26:47 -- common/autotest_common.sh@1083 -- # echo 2 00:12:21.833 19:26:47 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:21.833 19:26:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:21.833 19:26:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:21.833 19:26:47 -- common/autotest_common.sh@10 -- # set +x 00:12:21.833 ************************************ 00:12:21.833 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:21.833 ************************************ 00:12:21.833 19:26:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:22.093 * Looking for test storage... 
00:12:22.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:22.093 19:26:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:22.093 19:26:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:22.093 19:26:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:22.093 19:26:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:22.093 19:26:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:22.093 19:26:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:22.093 19:26:47 -- common/autotest_common.sh@1510 -- # bdfs=() 00:12:22.093 19:26:47 -- common/autotest_common.sh@1510 -- # local bdfs 00:12:22.093 19:26:47 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:12:22.093 19:26:47 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:12:22.093 19:26:47 -- common/autotest_common.sh@1499 -- # bdfs=() 00:12:22.093 19:26:47 -- common/autotest_common.sh@1499 -- # local bdfs 00:12:22.093 19:26:47 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:22.093 19:26:47 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:22.093 19:26:47 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:12:22.093 19:26:47 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:12:22.093 19:26:47 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:22.093 19:26:47 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:12:22.093 19:26:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:22.093 19:26:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:22.093 19:26:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=71176 00:12:22.093 19:26:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:22.093 19:26:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:22.093 19:26:47 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 71176 00:12:22.093 19:26:47 -- common/autotest_common.sh@817 -- # '[' -z 71176 ']' 00:12:22.093 19:26:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.093 19:26:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:22.093 19:26:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.093 19:26:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:22.093 19:26:47 -- common/autotest_common.sh@10 -- # set +x 00:12:22.351 [2024-04-24 19:26:47.772204] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
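The trace above shows how this test picks its target controller: get_first_nvme_bdf enumerates all addresses the same way the earlier tests did and takes the first one (0000:00:10.0 here), bailing out if none is found. A condensed bash sketch of that selection; using jq's .config[0] directly is an assumption rather than the helper's literal code:

    rootdir=${rootdir:-/home/vagrant/spdk_repo/spdk}
    # First NVMe controller address, or empty if none are present.
    bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[0].params.traddr')
    if [[ -z "$bdf" ]]; then
        echo 'No NVMe controller found' >&2
        exit 1
    fi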
00:12:22.351 [2024-04-24 19:26:47.772451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71176 ] 00:12:22.351 [2024-04-24 19:26:47.961985] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.609 [2024-04-24 19:26:48.241197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.609 [2024-04-24 19:26:48.241355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.609 [2024-04-24 19:26:48.241449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.609 [2024-04-24 19:26:48.241423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.038 19:26:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:24.038 19:26:49 -- common/autotest_common.sh@850 -- # return 0 00:12:24.038 19:26:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:24.038 19:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.038 19:26:49 -- common/autotest_common.sh@10 -- # set +x 00:12:24.038 nvme0n1 00:12:24.038 19:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.038 19:26:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:24.038 19:26:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_yDpAt.txt 00:12:24.038 19:26:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:24.038 19:26:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:24.038 19:26:49 -- common/autotest_common.sh@10 -- # set +x 00:12:24.038 true 00:12:24.038 19:26:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:24.038 19:26:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:24.038 19:26:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1713986809 00:12:24.038 19:26:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:24.038 19:26:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=71210 00:12:24.038 19:26:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:24.038 19:26:49 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:25.949 19:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.949 19:26:51 -- common/autotest_common.sh@10 -- # set +x 00:12:25.949 [2024-04-24 19:26:51.473619] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:12:25.949 [2024-04-24 19:26:51.474152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:25.949 [2024-04-24 19:26:51.474233] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:25.949 [2024-04-24 19:26:51.474299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.949 [2024-04-24 19:26:51.476845] 
bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:12:25.949 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 71210 00:12:25.949 19:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 71210 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 71210 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:25.949 19:26:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.949 19:26:51 -- common/autotest_common.sh@10 -- # set +x 00:12:25.949 19:26:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_yDpAt.txt 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_yDpAt.txt 00:12:25.949 19:26:51 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 71176 00:12:25.949 19:26:51 -- common/autotest_common.sh@936 -- # '[' -z 71176 ']' 00:12:25.949 19:26:51 -- common/autotest_common.sh@940 -- # kill -0 71176 00:12:25.949 19:26:51 -- common/autotest_common.sh@941 -- # uname 00:12:25.949 19:26:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:25.949 19:26:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71176 
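The base64_decode_bits calls traced above are how the test verifies that the stuck admin command really completed with the injected status: the completion captured in /tmp/err_inj_yDpAt.txt is base64-decoded and the SC and SCT fields are masked out of the status word. A bash reconstruction from the xtrace, with the status-word position (last two bytes of the 16-byte completion, bit 0 being the phase tag) filled in as an assumption:

    base64_decode_bits_sketch() {
        local blob=$1 shift_by=$2 mask=$3
        local -a bytes
        # One byte per array element, e.g. 0x00 0x00 ... 0x02 0x00
        bytes=($(base64 -d <(printf '%s' "$blob") | hexdump -ve '/1 "0x%02x\n"'))
        # 16-bit status word from the completion's last two bytes (little-endian)
        local status=$(( bytes[15] << 8 | bytes[14] ))
        printf '0x%x\n' $(( (status >> shift_by) & mask ))
    }

    base64_decode_bits_sketch 'AAAAAAAAAAAAAAAAAAACAA==' 1 255   # SC  -> 0x1
    base64_decode_bits_sketch 'AAAAAAAAAAAAAAAAAAACAA==' 9 3     # SCT -> 0x0

Both values match the --sc 1 --sct 0 injected with bdev_nvme_add_error_injection earlier, which is the test's pass condition.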
00:12:26.207 killing process with pid 71176 00:12:26.207 19:26:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:26.207 19:26:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:26.207 19:26:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71176' 00:12:26.207 19:26:51 -- common/autotest_common.sh@955 -- # kill 71176 00:12:26.207 19:26:51 -- common/autotest_common.sh@960 -- # wait 71176 00:12:28.737 19:26:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:28.737 19:26:54 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:28.737 00:12:28.737 real 0m6.953s 00:12:28.737 user 0m23.897s 00:12:28.737 sys 0m0.751s 00:12:28.997 ************************************ 00:12:28.997 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:28.997 ************************************ 00:12:28.997 19:26:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:28.997 19:26:54 -- common/autotest_common.sh@10 -- # set +x 00:12:28.997 19:26:54 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:28.997 19:26:54 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:28.997 19:26:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:28.997 19:26:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:28.997 19:26:54 -- common/autotest_common.sh@10 -- # set +x 00:12:28.997 ************************************ 00:12:28.997 START TEST nvme_fio 00:12:28.997 ************************************ 00:12:28.997 19:26:54 -- common/autotest_common.sh@1111 -- # nvme_fio_test 00:12:28.997 19:26:54 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:28.997 19:26:54 -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:28.997 19:26:54 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:28.997 19:26:54 -- common/autotest_common.sh@1499 -- # bdfs=() 00:12:28.997 19:26:54 -- common/autotest_common.sh@1499 -- # local bdfs 00:12:28.997 19:26:54 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:28.997 19:26:54 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:28.997 19:26:54 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:12:28.997 19:26:54 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:12:28.997 19:26:54 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:28.997 19:26:54 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:28.997 19:26:54 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:28.997 19:26:54 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:28.997 19:26:54 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:28.997 19:26:54 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:29.566 19:26:54 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:29.566 19:26:54 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:29.566 19:26:55 -- nvme/nvme.sh@41 -- # bs=4096 00:12:29.566 19:26:55 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:29.566 19:26:55 -- common/autotest_common.sh@1346 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:29.566 19:26:55 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:12:29.566 19:26:55 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:29.566 19:26:55 -- common/autotest_common.sh@1325 -- # local sanitizers 00:12:29.566 19:26:55 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:29.566 19:26:55 -- common/autotest_common.sh@1327 -- # shift 00:12:29.566 19:26:55 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:12:29.566 19:26:55 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:12:29.566 19:26:55 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:29.566 19:26:55 -- common/autotest_common.sh@1331 -- # grep libasan 00:12:29.566 19:26:55 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:12:29.825 19:26:55 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:29.825 19:26:55 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:29.825 19:26:55 -- common/autotest_common.sh@1333 -- # break 00:12:29.825 19:26:55 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:29.825 19:26:55 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:29.825 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:29.825 fio-3.35 00:12:29.825 Starting 1 thread 00:12:36.471 00:12:36.471 test: (groupid=0, jobs=1): err= 0: pid=71365: Wed Apr 24 19:27:01 2024 00:12:36.471 read: IOPS=22.6k, BW=88.2MiB/s (92.5MB/s)(177MiB/2001msec) 00:12:36.471 slat (nsec): min=4005, max=57006, avg=5411.18, stdev=1539.09 00:12:36.471 clat (usec): min=210, max=9617, avg=2817.39, stdev=527.84 00:12:36.471 lat (usec): min=215, max=9655, avg=2822.80, stdev=528.73 00:12:36.471 clat percentiles (usec): 00:12:36.471 | 1.00th=[ 2409], 5.00th=[ 2507], 10.00th=[ 2573], 20.00th=[ 2638], 00:12:36.471 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:12:36.471 | 70.00th=[ 2802], 80.00th=[ 2868], 90.00th=[ 2966], 95.00th=[ 3294], 00:12:36.471 | 99.00th=[ 5669], 99.50th=[ 6980], 99.90th=[ 8848], 99.95th=[ 8979], 00:12:36.471 | 99.99th=[ 9372] 00:12:36.471 bw ( KiB/s): min=85152, max=92556, per=98.74%, avg=89214.67, stdev=3754.34, samples=3 00:12:36.471 iops : min=21288, max=23139, avg=22303.67, stdev=938.58, samples=3 00:12:36.471 write: IOPS=22.5k, BW=87.8MiB/s (92.0MB/s)(176MiB/2001msec); 0 zone resets 00:12:36.471 slat (nsec): min=4206, max=56137, avg=5583.64, stdev=1571.25 00:12:36.471 clat (usec): min=295, max=9463, avg=2826.09, stdev=520.89 00:12:36.471 lat (usec): min=301, max=9479, avg=2831.67, stdev=521.74 00:12:36.471 clat percentiles (usec): 00:12:36.471 | 1.00th=[ 2409], 5.00th=[ 2540], 10.00th=[ 2573], 20.00th=[ 2638], 00:12:36.471 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2769], 00:12:36.471 | 70.00th=[ 2802], 80.00th=[ 2868], 90.00th=[ 2999], 95.00th=[ 3326], 00:12:36.471 | 99.00th=[ 5604], 99.50th=[ 6783], 99.90th=[ 8848], 99.95th=[ 8979], 00:12:36.471 | 99.99th=[ 9110] 00:12:36.471 bw ( KiB/s): min=85448, max=93502, per=99.51%, 
avg=89431.33, stdev=4027.71, samples=3 00:12:36.471 iops : min=21362, max=23375, avg=22357.67, stdev=1006.67, samples=3 00:12:36.471 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:36.471 lat (msec) : 2=0.15%, 4=97.75%, 10=2.07% 00:12:36.471 cpu : usr=99.20%, sys=0.05%, ctx=2, majf=0, minf=605 00:12:36.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:36.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:36.471 issued rwts: total=45200,44958,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:36.471 00:12:36.471 Run status group 0 (all jobs): 00:12:36.471 READ: bw=88.2MiB/s (92.5MB/s), 88.2MiB/s-88.2MiB/s (92.5MB/s-92.5MB/s), io=177MiB (185MB), run=2001-2001msec 00:12:36.471 WRITE: bw=87.8MiB/s (92.0MB/s), 87.8MiB/s-87.8MiB/s (92.0MB/s-92.0MB/s), io=176MiB (184MB), run=2001-2001msec 00:12:36.471 ----------------------------------------------------- 00:12:36.471 Suppressions used: 00:12:36.471 count bytes template 00:12:36.471 1 32 /usr/src/fio/parse.c 00:12:36.471 1 8 libtcmalloc_minimal.so 00:12:36.471 ----------------------------------------------------- 00:12:36.471 00:12:36.471 19:27:01 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:36.471 19:27:01 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:36.471 19:27:01 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:36.471 19:27:01 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:36.471 19:27:01 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:36.471 19:27:01 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:36.471 19:27:01 -- nvme/nvme.sh@41 -- # bs=4096 00:12:36.471 19:27:01 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:36.471 19:27:01 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:36.471 19:27:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:12:36.471 19:27:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:36.471 19:27:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:12:36.471 19:27:01 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:36.471 19:27:01 -- common/autotest_common.sh@1327 -- # shift 00:12:36.471 19:27:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:12:36.471 19:27:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:12:36.471 19:27:01 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:36.471 19:27:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:12:36.471 19:27:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:12:36.471 19:27:01 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:36.471 19:27:01 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:36.471 19:27:01 -- common/autotest_common.sh@1333 -- # break 00:12:36.471 19:27:01 -- common/autotest_common.sh@1338 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:36.471 19:27:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:36.471 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:36.472 fio-3.35 00:12:36.472 Starting 1 thread 00:12:43.035 00:12:43.035 test: (groupid=0, jobs=1): err= 0: pid=71459: Wed Apr 24 19:27:07 2024 00:12:43.035 read: IOPS=21.7k, BW=84.9MiB/s (89.1MB/s)(170MiB/2001msec) 00:12:43.035 slat (nsec): min=4429, max=60957, avg=5595.90, stdev=1646.38 00:12:43.035 clat (usec): min=240, max=10550, avg=2933.68, stdev=609.44 00:12:43.035 lat (usec): min=245, max=10611, avg=2939.28, stdev=610.32 00:12:43.035 clat percentiles (usec): 00:12:43.035 | 1.00th=[ 2040], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2704], 00:12:43.035 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2868], 00:12:43.035 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3326], 95.00th=[ 3654], 00:12:43.035 | 99.00th=[ 5538], 99.50th=[ 7177], 99.90th=[ 9634], 99.95th=[ 9765], 00:12:43.035 | 99.99th=[10290] 00:12:43.035 bw ( KiB/s): min=83520, max=89904, per=99.33%, avg=86386.67, stdev=3241.36, samples=3 00:12:43.035 iops : min=20880, max=22476, avg=21596.67, stdev=810.34, samples=3 00:12:43.035 write: IOPS=21.6k, BW=84.3MiB/s (88.4MB/s)(169MiB/2001msec); 0 zone resets 00:12:43.035 slat (nsec): min=4540, max=64224, avg=5733.72, stdev=1629.73 00:12:43.035 clat (usec): min=224, max=10513, avg=2935.85, stdev=610.06 00:12:43.035 lat (usec): min=229, max=10518, avg=2941.59, stdev=610.97 00:12:43.035 clat percentiles (usec): 00:12:43.035 | 1.00th=[ 2008], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2704], 00:12:43.035 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2868], 00:12:43.035 | 70.00th=[ 2900], 80.00th=[ 2966], 90.00th=[ 3326], 95.00th=[ 3654], 00:12:43.035 | 99.00th=[ 5538], 99.50th=[ 7242], 99.90th=[ 9503], 99.95th=[ 9634], 00:12:43.035 | 99.99th=[10159] 00:12:43.035 bw ( KiB/s): min=83472, max=90512, per=100.00%, avg=86570.67, stdev=3594.85, samples=3 00:12:43.035 iops : min=20868, max=22628, avg=21642.67, stdev=898.71, samples=3 00:12:43.035 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:12:43.035 lat (msec) : 2=0.88%, 4=96.38%, 10=2.67%, 20=0.02% 00:12:43.035 cpu : usr=98.90%, sys=0.25%, ctx=4, majf=0, minf=605 00:12:43.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:43.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:43.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:43.035 issued rwts: total=43505,43188,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:43.035 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:43.035 00:12:43.035 Run status group 0 (all jobs): 00:12:43.035 READ: bw=84.9MiB/s (89.1MB/s), 84.9MiB/s-84.9MiB/s (89.1MB/s-89.1MB/s), io=170MiB (178MB), run=2001-2001msec 00:12:43.035 WRITE: bw=84.3MiB/s (88.4MB/s), 84.3MiB/s-84.3MiB/s (88.4MB/s-88.4MB/s), io=169MiB (177MB), run=2001-2001msec 00:12:43.035 ----------------------------------------------------- 00:12:43.035 Suppressions used: 00:12:43.035 count bytes template 00:12:43.035 1 32 /usr/src/fio/parse.c 00:12:43.035 1 8 libtcmalloc_minimal.so 00:12:43.035 ----------------------------------------------------- 00:12:43.035 00:12:43.035 19:27:07 -- nvme/nvme.sh@44 -- # 
ran_fio=true 00:12:43.035 19:27:07 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:43.035 19:27:07 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:43.035 19:27:07 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:43.035 19:27:08 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:43.035 19:27:08 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:43.035 19:27:08 -- nvme/nvme.sh@41 -- # bs=4096 00:12:43.035 19:27:08 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:43.035 19:27:08 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:43.035 19:27:08 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:12:43.035 19:27:08 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:43.035 19:27:08 -- common/autotest_common.sh@1325 -- # local sanitizers 00:12:43.035 19:27:08 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:43.035 19:27:08 -- common/autotest_common.sh@1327 -- # shift 00:12:43.035 19:27:08 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:12:43.035 19:27:08 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:12:43.035 19:27:08 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:43.035 19:27:08 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:12:43.035 19:27:08 -- common/autotest_common.sh@1331 -- # grep libasan 00:12:43.035 19:27:08 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:43.035 19:27:08 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:43.035 19:27:08 -- common/autotest_common.sh@1333 -- # break 00:12:43.035 19:27:08 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:43.035 19:27:08 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:43.035 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:43.035 fio-3.35 00:12:43.035 Starting 1 thread 00:12:49.610 00:12:49.610 test: (groupid=0, jobs=1): err= 0: pid=71547: Wed Apr 24 19:27:15 2024 00:12:49.610 read: IOPS=20.1k, BW=78.4MiB/s (82.2MB/s)(157MiB/2001msec) 00:12:49.610 slat (usec): min=4, max=138, avg= 5.95, stdev= 2.25 00:12:49.610 clat (usec): min=245, max=9242, avg=3181.79, stdev=829.65 00:12:49.610 lat (usec): min=250, max=9248, avg=3187.73, stdev=831.02 00:12:49.610 clat percentiles (usec): 00:12:49.611 | 1.00th=[ 2507], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2704], 00:12:49.611 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2933], 00:12:49.611 | 70.00th=[ 3294], 80.00th=[ 3720], 90.00th=[ 3949], 95.00th=[ 4293], 00:12:49.611 | 99.00th=[ 7373], 99.50th=[ 8094], 99.90th=[ 8717], 99.95th=[ 8848], 00:12:49.611 | 99.99th=[ 8979] 00:12:49.611 bw ( KiB/s): min=69216, max=86560, per=95.95%, avg=76994.67, stdev=8808.96, samples=3 00:12:49.611 iops : min=17304, 
max=21640, avg=19248.67, stdev=2202.24, samples=3 00:12:49.611 write: IOPS=20.0k, BW=78.2MiB/s (82.0MB/s)(156MiB/2001msec); 0 zone resets 00:12:49.611 slat (nsec): min=4481, max=71459, avg=6114.57, stdev=2168.14 00:12:49.611 clat (usec): min=228, max=8921, avg=3183.47, stdev=833.43 00:12:49.611 lat (usec): min=234, max=8931, avg=3189.59, stdev=834.80 00:12:49.611 clat percentiles (usec): 00:12:49.611 | 1.00th=[ 2507], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2704], 00:12:49.611 | 30.00th=[ 2737], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2933], 00:12:49.611 | 70.00th=[ 3294], 80.00th=[ 3720], 90.00th=[ 3949], 95.00th=[ 4293], 00:12:49.611 | 99.00th=[ 7439], 99.50th=[ 8094], 99.90th=[ 8717], 99.95th=[ 8717], 00:12:49.611 | 99.99th=[ 8848] 00:12:49.611 bw ( KiB/s): min=69584, max=86432, per=96.25%, avg=77058.67, stdev=8582.98, samples=3 00:12:49.611 iops : min=17396, max=21608, avg=19264.67, stdev=2145.74, samples=3 00:12:49.611 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:49.611 lat (msec) : 2=0.06%, 4=91.34%, 10=8.56% 00:12:49.611 cpu : usr=98.85%, sys=0.20%, ctx=15, majf=0, minf=606 00:12:49.611 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:49.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.611 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:49.611 issued rwts: total=40140,40049,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:49.611 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:49.611 00:12:49.611 Run status group 0 (all jobs): 00:12:49.611 READ: bw=78.4MiB/s (82.2MB/s), 78.4MiB/s-78.4MiB/s (82.2MB/s-82.2MB/s), io=157MiB (164MB), run=2001-2001msec 00:12:49.611 WRITE: bw=78.2MiB/s (82.0MB/s), 78.2MiB/s-78.2MiB/s (82.0MB/s-82.0MB/s), io=156MiB (164MB), run=2001-2001msec 00:12:49.870 ----------------------------------------------------- 00:12:49.870 Suppressions used: 00:12:49.870 count bytes template 00:12:49.870 1 32 /usr/src/fio/parse.c 00:12:49.870 1 8 libtcmalloc_minimal.so 00:12:49.870 ----------------------------------------------------- 00:12:49.870 00:12:49.870 19:27:15 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:49.870 19:27:15 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:49.870 19:27:15 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:49.870 19:27:15 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:50.129 19:27:15 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:50.129 19:27:15 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:50.389 19:27:16 -- nvme/nvme.sh@41 -- # bs=4096 00:12:50.389 19:27:16 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:50.389 19:27:16 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:50.389 19:27:16 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:12:50.389 19:27:16 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:50.389 19:27:16 -- common/autotest_common.sh@1325 -- # local sanitizers 00:12:50.389 19:27:16 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
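The fio_plugin() sequence above has now run for 0000:00:10.0, 0000:00:11.0 and 0000:00:12.0, and is being set up for 0000:00:13.0 below. Stripped of the xtrace noise, the preload logic it walks through amounts to this minimal sketch, reconstructed from the trace (autotest_common.sh@1323-1338) — the paths and sanitizer list are as logged, everything else is illustrative, not the verbatim SPDK source:

  fio_dir=/usr/src/fio
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  sanitizers=('libasan' 'libclang_rt.asan')
  asan_lib=
  for sanitizer in "${sanitizers[@]}"; do
    # Third ldd column is the resolved library path, if the plugin links it.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && break   # resolves to /usr/lib64/libasan.so.8 in this run
  done
  # The ASAN runtime must be mapped before the (uninstrumented) fio binary
  # loads the instrumented plugin, hence both go into LD_PRELOAD.
  LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" \
    /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
    '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096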
00:12:50.389 19:27:16 -- common/autotest_common.sh@1327 -- # shift 00:12:50.389 19:27:16 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:12:50.389 19:27:16 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:12:50.389 19:27:16 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:50.389 19:27:16 -- common/autotest_common.sh@1331 -- # grep libasan 00:12:50.389 19:27:16 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:12:50.389 19:27:16 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:50.389 19:27:16 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:50.389 19:27:16 -- common/autotest_common.sh@1333 -- # break 00:12:50.389 19:27:16 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:50.389 19:27:16 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:50.649 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:50.649 fio-3.35 00:12:50.649 Starting 1 thread 00:13:02.858 00:13:02.858 test: (groupid=0, jobs=1): err= 0: pid=71646: Wed Apr 24 19:27:26 2024 00:13:02.858 read: IOPS=20.5k, BW=80.1MiB/s (84.0MB/s)(160MiB/2001msec) 00:13:02.858 slat (usec): min=4, max=718, avg= 5.96, stdev= 4.27 00:13:02.858 clat (usec): min=761, max=11532, avg=3110.10, stdev=1072.05 00:13:02.858 lat (usec): min=776, max=11549, avg=3116.05, stdev=1073.69 00:13:02.858 clat percentiles (usec): 00:13:02.858 | 1.00th=[ 1729], 5.00th=[ 2474], 10.00th=[ 2606], 20.00th=[ 2671], 00:13:02.858 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:13:02.858 | 70.00th=[ 2900], 80.00th=[ 3195], 90.00th=[ 3916], 95.00th=[ 5080], 00:13:02.858 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[10814], 99.95th=[11338], 00:13:02.858 | 99.99th=[11469] 00:13:02.858 bw ( KiB/s): min=80232, max=88368, per=100.00%, avg=83224.00, stdev=4474.59, samples=3 00:13:02.858 iops : min=20058, max=22092, avg=20806.00, stdev=1118.65, samples=3 00:13:02.858 write: IOPS=20.5k, BW=79.9MiB/s (83.8MB/s)(160MiB/2001msec); 0 zone resets 00:13:02.858 slat (nsec): min=4525, max=90661, avg=6102.81, stdev=2511.63 00:13:02.858 clat (usec): min=995, max=11528, avg=3107.56, stdev=1058.75 00:13:02.858 lat (usec): min=1000, max=11547, avg=3113.66, stdev=1060.31 00:13:02.858 clat percentiles (usec): 00:13:02.858 | 1.00th=[ 1696], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2671], 00:13:02.858 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:13:02.858 | 70.00th=[ 2900], 80.00th=[ 3195], 90.00th=[ 3916], 95.00th=[ 5080], 00:13:02.858 | 99.00th=[ 8455], 99.50th=[ 9110], 99.90th=[10552], 99.95th=[11207], 00:13:02.858 | 99.99th=[11469] 00:13:02.858 bw ( KiB/s): min=80104, max=88336, per=100.00%, avg=83248.00, stdev=4447.00, samples=3 00:13:02.858 iops : min=20026, max=22084, avg=20812.00, stdev=1111.75, samples=3 00:13:02.858 lat (usec) : 1000=0.01% 00:13:02.858 lat (msec) : 2=2.02%, 4=89.08%, 10=8.67%, 20=0.23% 00:13:02.858 cpu : usr=98.75%, sys=0.35%, ctx=3, majf=0, minf=603 00:13:02.858 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:13:02.858 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:02.858 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:13:02.858 issued rwts: total=41047,40939,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:02.858 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:02.858 00:13:02.858 Run status group 0 (all jobs): 00:13:02.858 READ: bw=80.1MiB/s (84.0MB/s), 80.1MiB/s-80.1MiB/s (84.0MB/s-84.0MB/s), io=160MiB (168MB), run=2001-2001msec 00:13:02.858 WRITE: bw=79.9MiB/s (83.8MB/s), 79.9MiB/s-79.9MiB/s (83.8MB/s-83.8MB/s), io=160MiB (168MB), run=2001-2001msec 00:13:02.858 ----------------------------------------------------- 00:13:02.858 Suppressions used: 00:13:02.858 count bytes template 00:13:02.858 1 32 /usr/src/fio/parse.c 00:13:02.858 1 8 libtcmalloc_minimal.so 00:13:02.858 ----------------------------------------------------- 00:13:02.858 00:13:02.858 19:27:26 -- nvme/nvme.sh@44 -- # ran_fio=true 00:13:02.858 19:27:26 -- nvme/nvme.sh@46 -- # true 00:13:02.858 00:13:02.858 real 0m32.151s 00:13:02.858 user 0m17.037s 00:13:02.858 sys 0m28.660s 00:13:02.858 ************************************ 00:13:02.858 END TEST nvme_fio 00:13:02.858 ************************************ 00:13:02.858 19:27:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:02.858 19:27:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.858 ************************************ 00:13:02.858 END TEST nvme 00:13:02.858 ************************************ 00:13:02.858 00:13:02.858 real 1m48.025s 00:13:02.858 user 3m53.883s 00:13:02.858 sys 0m40.763s 00:13:02.858 19:27:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:02.858 19:27:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.858 19:27:26 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:13:02.858 19:27:26 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:02.858 19:27:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:02.858 19:27:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:02.858 19:27:26 -- common/autotest_common.sh@10 -- # set +x 00:13:02.858 ************************************ 00:13:02.858 START TEST nvme_scc 00:13:02.858 ************************************ 00:13:02.858 19:27:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:13:02.858 * Looking for test storage... 
00:13:02.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:02.858 19:27:27 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:02.858 19:27:27 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:02.858 19:27:27 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:02.858 19:27:27 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:02.858 19:27:27 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:02.858 19:27:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:02.858 19:27:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:02.858 19:27:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:02.858 19:27:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.858 19:27:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.858 19:27:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.858 19:27:27 -- paths/export.sh@5 -- # export PATH 00:13:02.858 19:27:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:02.858 19:27:27 -- nvme/functions.sh@10 -- # ctrls=() 00:13:02.858 19:27:27 -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:02.858 19:27:27 -- nvme/functions.sh@11 -- # nvmes=() 00:13:02.858 19:27:27 -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:02.858 19:27:27 -- nvme/functions.sh@12 -- # bdfs=() 00:13:02.858 19:27:27 -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:02.858 19:27:27 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:02.858 19:27:27 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:02.858 19:27:27 -- nvme/functions.sh@14 -- # nvme_name= 00:13:02.858 19:27:27 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:02.858 19:27:27 -- nvme/nvme_scc.sh@12 -- # uname 00:13:02.858 19:27:27 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
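From here the trace is scan_nvme_ctrls caching every id-ctrl register of each discovered controller into a per-device bash associative array, one eval per field — that is what produces the long run of nvme_get lines that follows. Condensed, the loop looks roughly like the sketch below, based on the traced functions.sh@45-57 calls, with the pci_can_use whitelist checks and validation omitted:

  declare -A ctrls nvmes bdfs            # filled in as controllers are found
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                 # e.g. nvme0
    declare -gA "$ctrl_dev=()"           # one assoc array per controller
    # nvme-cli prints "reg : val" lines; split on ':' exactly as the trace does.
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}           # vid, ssvid, sn, mn, mdts, oncs, ...
      [[ -n $reg ]] && eval "$ctrl_dev[$reg]=\"${val# }\""
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ctrl_dev")
  done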
00:13:02.858 19:27:27 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:13:02.858 19:27:27 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:02.858 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:02.858 Waiting for block devices as requested 00:13:02.858 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:02.858 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:02.858 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:02.858 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:08.144 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:08.144 19:27:33 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:13:08.144 19:27:33 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:08.144 19:27:33 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:08.144 19:27:33 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:08.144 19:27:33 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:08.144 19:27:33 -- scripts/common.sh@15 -- # local i 00:13:08.144 19:27:33 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:08.144 19:27:33 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:08.144 19:27:33 -- scripts/common.sh@24 -- # return 0 00:13:08.144 19:27:33 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:08.144 19:27:33 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:08.144 19:27:33 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@18 -- # shift 00:13:08.144 19:27:33 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.144 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.144 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:08.144 19:27:33 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:08.145 
19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 3 
]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 
00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.145 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.145 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:08.145 19:27:33 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # 
nvme0[pels]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # 
eval 'nvme0[awun]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.146 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.146 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:08.146 19:27:33 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:08.147 19:27:33 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:08.147 19:27:33 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@56 -- # 
ns_dev=nvme0n1 00:13:08.147 19:27:33 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:08.147 19:27:33 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@18 -- # shift 00:13:08.147 19:27:33 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 
-- # eval 'nvme0n1[dps]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.147 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:08.147 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:08.147 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 
00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:08.148 19:27:33 
-- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:08.148 19:27:33 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:08.148 19:27:33 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:08.148 19:27:33 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:08.148 19:27:33 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:08.148 19:27:33 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:08.148 19:27:33 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:08.148 19:27:33 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:08.148 19:27:33 -- scripts/common.sh@15 -- # local i 00:13:08.148 19:27:33 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:08.148 19:27:33 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:08.148 19:27:33 -- scripts/common.sh@24 -- # return 0 00:13:08.148 19:27:33 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:08.148 19:27:33 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:08.148 19:27:33 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@18 -- # shift 00:13:08.148 19:27:33 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # 
[[ -n 0x1b36 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:08.148 19:27:33 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:08.148 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.148 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:08.148 19:27:33 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 
00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.149 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.149 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:08.149 19:27:33 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 
00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # 
nvme1[anatt]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:08.150 19:27:33 -- 
nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.150 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.150 19:27:33 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:08.150 19:27:33 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 
rrl:0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:08.151 19:27:33 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:08.151 19:27:33 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:08.151 19:27:33 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:08.151 19:27:33 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@18 -- # shift 00:13:08.151 19:27:33 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 
-- # eval 'nvme1n1[nlbaf]="7"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 
19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.151 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:08.151 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:08.151 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 
00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 
00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:08.152 19:27:33 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:08.152 19:27:33 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:08.152 19:27:33 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:08.152 19:27:33 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:08.152 19:27:33 -- 
nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:08.152 19:27:33 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:13:08.152 19:27:33 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:08.152 19:27:33 -- scripts/common.sh@15 -- # local i 00:13:08.152 19:27:33 -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:13:08.152 19:27:33 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:08.152 19:27:33 -- scripts/common.sh@24 -- # return 0 00:13:08.152 19:27:33 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:08.152 19:27:33 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:08.152 19:27:33 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@18 -- # shift 00:13:08.152 19:27:33 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.152 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:08.152 19:27:33 -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.152 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # 
nvme2[ieee]=525400 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 
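The trace above and below repeats a single pattern: nvme_get shells out to the pinned nvme-cli binary, splits every "field : value" line of id-ctrl output on ':', and evals the pair into a global associative array (nvme2[vid]=0x1b36, nvme2[sn]='12342 ', and so on). A minimal sketch of that loop, not the verbatim SPDK helper, assuming the array name comes first and the remaining arguments form the command to run:

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                     # same form the trace shows: local -gA 'nvme2=()'
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue           # skip header/blank lines with no value
        reg=${reg//[[:space:]]/}            # 'vid ' -> 'vid'
        eval "${ref}[${reg}]=\"${val# }\""  # nvme2[vid]="0x1b36"
    done < <("$@")                          # e.g. /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
}

Note that because val is the last variable in the read, any further colons stay inside it, which is how composite values such as 'ms:0 lbads:9 rp:0 ' survive intact.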
00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 
19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.153 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.153 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:08.153 19:27:33 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 
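Several of the captured fields are power-of-two encodings rather than plain counts. The mdts=7 recorded for this controller earlier in the trace, for instance, caps a single transfer at 2^7 units of the controller's minimum page size; assuming the usual 4 KiB MPSMIN (the CAP register is not in this log), that is 512 KiB. A hypothetical decode over the filled-in array:

mdts_bytes() {
    # mpsmin (CAP.MPSMIN) is assumed 4 KiB here; it is not recorded in the trace
    local mdts=${nvme2[mdts]} mpsmin=4096
    echo $(( (1 << mdts) * mpsmin ))        # 2^7 * 4096 = 524288
}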
00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:08.154 19:27:33 
-- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
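The sqes/cqes values parsed next pack two log2 sizes into one byte: the low nibble is the required (minimum) queue-entry size, the high nibble the maximum. The 0x66 and 0x44 recorded below therefore mean 64-byte submission entries and 16-byte completion entries. A small decode sketch:

decode_qes() {
    local v=$(( $1 ))                       # accepts 0x66, 0x44, ...
    echo "min=$(( 1 << (v & 0xf) )) max=$(( 1 << (v >> 4) ))"
}
# decode_qes 0x66   -> min=64 max=64  (SQE bytes)
# decode_qes 0x44   -> min=16 max=16  (CQE bytes)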
00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.154 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:08.154 19:27:33 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.154 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # 
nvme2[icdoff]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:08.155 19:27:33 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:08.155 19:27:33 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:08.155 19:27:33 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:08.155 19:27:33 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@18 -- # shift 00:13:08.155 19:27:33 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 
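With the id-ctrl pass done, the trace switches to this controller's namespaces: functions.sh@53-57 binds a nameref to nvme2_ns, loops over /sys/class/nvme/nvme2/nvme2n*, and reruns nvme_get with id-ns for each node. A condensed sketch of that walk (nvme_get as sketched earlier, simplified relative to the real script):

declare -A nvme2_ns
declare -n _ctrl_ns=nvme2_ns
for ns in /sys/class/nvme/nvme2/nvme2n*; do
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}                        # nvme2n1, nvme2n2, nvme2n3
    nvme_get "$ns_dev" /usr/local/src/nvme-cli/nvme id-ns "/dev/$ns_dev"
    _ctrl_ns[${ns_dev##*n}]=$ns_dev         # _ctrl_ns[1]=nvme2n1, ...
done

The nsze/ncap/nuse triple being recorded here (0x100000 blocks each) works out to 4 GiB at the 4 KiB block format this namespace reports as in use.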
00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:08.155 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.155 19:27:33 -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:08.155 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:08.156 19:27:33 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nvmsetid]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:08.156 19:27:33 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.156 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.156 19:27:33 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:08.156 19:27:33 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:08.156 19:27:33 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:08.156 19:27:33 -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:08.157 19:27:33 -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:08.157 19:27:33 -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@18 -- # shift 00:13:08.157 19:27:33 -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:08.157 
19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nacwu]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.157 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:08.157 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:08.157 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:08.158 19:27:33 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:08.158 19:27:33 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:08.158 19:27:33 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:08.158 19:27:33 -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:08.158 19:27:33 -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@18 -- # shift 
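Every id-ns pass ends, as above, with the eight LBA formats: ms is the per-block metadata size in bytes, lbads the data size as a log2, rp the relative performance, and the low nibble of flbas (0x4 for these namespaces) marks which lbaf is tagged "(in use)". A hypothetical helper reading the active block size back out of one of these arrays:

lba_block_size() {
    local -n d=$1                           # e.g. lba_block_size nvme2n2
    local idx=$(( ${d[flbas]} & 0xf ))      # 0x4 -> lbaf4
    local lbaf=${d[lbaf$idx]}               # 'ms:0 lbads:12 rp:0 (in use)'
    lbaf=${lbaf##*lbads:}                   # keep the text after 'lbads:'
    echo $(( 1 << ${lbaf%% *} ))            # 2^12 = 4096-byte blocks
}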
00:13:08.158 19:27:33 -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 
19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.158 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.158 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:08.158 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # 
IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:08.159 19:27:33 
-- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:08.159 19:27:33 -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.159 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.159 19:27:33 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:08.159 19:27:33 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:08.159 19:27:33 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:08.159 19:27:33 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:08.159 19:27:33 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:08.159 19:27:33 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:08.159 19:27:33 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:08.159 19:27:33 -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:08.159 19:27:33 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:08.159 19:27:33 -- scripts/common.sh@15 -- # local i 00:13:08.159 19:27:33 -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:13:08.159 19:27:33 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:08.159 19:27:33 -- scripts/common.sh@24 -- # return 0 00:13:08.159 19:27:33 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:08.160 19:27:33 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:08.160 19:27:33 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@18 -- # shift 00:13:08.160 19:27:33 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # 
IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # 
eval 'nvme3[rtd3r]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # 
IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.160 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.160 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.160 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 
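Among the mostly zero-valued fields in this id-ctrl dump, note how the string registers came back earlier: sn, mn and fr were stored as '12343 ', 'QEMU NVMe Ctrl ' and '8.0.0 '. These are fixed-width, space-padded ASCII fields in the Identify Controller structure (20, 40 and 8 bytes respectively), and the quoted eval in the trace deliberately preserves that padding. A hypothetical helper for when a comparison needs the unpadded value (trim_pad is not part of functions.sh):

    trim_pad() {                          # drop trailing blanks from a padded field
        local s=$1
        while [[ $s == *' ' ]]; do s=${s%' '}; done
        printf '%s\n' "$s"
    }
    trim_pad "${nvme3[mn]}"               # -> 'QEMU NVMe Ctrl'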
00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.161 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:08.161 19:27:33 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.161 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:08.162 19:27:33 -- 
nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 
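A subtlety the lbaf and power-state entries make visible: IFS is set to ':' alone, so read splits only at the first colon, and the entire remainder, internal colons included, lands in val. That is how composite descriptors such as the ps0 line just above survive as single array values. A one-line demonstration of the same split:

    IFS=: read -r reg val <<< 'ps0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    printf '[%s]\n' "$val"   # -> [ mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0]
                             # leading space intact; the traced assignments show it trimmed before eval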
00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:08.162 19:27:33 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # IFS=: 00:13:08.162 19:27:33 -- nvme/functions.sh@21 -- # read -r reg val 00:13:08.162 19:27:33 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:08.162 19:27:33 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:08.162 19:27:33 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:08.162 19:27:33 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:08.162 19:27:33 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:08.162 19:27:33 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:08.162 19:27:33 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:13:08.162 19:27:33 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:13:08.162 19:27:33 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:08.162 19:27:33 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:13:08.162 19:27:33 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:08.162 19:27:33 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:13:08.162 19:27:33 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:13:08.162 19:27:33 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:08.162 19:27:33 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:13:08.162 19:27:33 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:13:08.162 19:27:33 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:13:08.162 19:27:33 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:13:08.162 19:27:33 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:13:08.162 19:27:33 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:13:08.162 19:27:33 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:08.162 19:27:33 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@76 -- # echo 0x15d 00:13:08.162 19:27:33 -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:08.162 19:27:33 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:08.162 19:27:33 -- nvme/functions.sh@197 -- # echo nvme1 00:13:08.162 19:27:33 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:08.162 19:27:33 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:13:08.162 19:27:33 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:13:08.162 19:27:33 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:13:08.162 19:27:33 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:13:08.162 19:27:33 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:13:08.162 19:27:33 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:13:08.162 19:27:33 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:08.162 19:27:33 -- 
nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:08.162 19:27:33 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@76 -- # echo 0x15d 00:13:08.162 19:27:33 -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:08.162 19:27:33 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:08.162 19:27:33 -- nvme/functions.sh@197 -- # echo nvme0 00:13:08.162 19:27:33 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:08.162 19:27:33 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:13:08.162 19:27:33 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:13:08.162 19:27:33 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:13:08.162 19:27:33 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:13:08.162 19:27:33 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:13:08.162 19:27:33 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:13:08.162 19:27:33 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:08.162 19:27:33 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@76 -- # echo 0x15d 00:13:08.162 19:27:33 -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:08.162 19:27:33 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:08.162 19:27:33 -- nvme/functions.sh@197 -- # echo nvme3 00:13:08.162 19:27:33 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:08.162 19:27:33 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:13:08.162 19:27:33 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:13:08.162 19:27:33 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:13:08.162 19:27:33 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:13:08.162 19:27:33 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:13:08.162 19:27:33 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:13:08.162 19:27:33 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:08.162 19:27:33 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:13:08.162 19:27:33 -- nvme/functions.sh@76 -- # echo 0x15d 00:13:08.162 19:27:33 -- nvme/functions.sh@184 -- # oncs=0x15d 00:13:08.162 19:27:33 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:13:08.162 19:27:33 -- nvme/functions.sh@197 -- # echo nvme2 00:13:08.162 19:27:33 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:13:08.162 19:27:33 -- nvme/functions.sh@206 -- # echo nvme1 00:13:08.162 19:27:33 -- nvme/functions.sh@207 -- # return 0 00:13:08.162 19:27:33 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:13:08.162 19:27:33 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:13:08.162 19:27:33 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:08.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:09.299 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:09.558 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:09.558 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:09.558 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:09.558 19:27:35 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:09.558 19:27:35 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:09.558 19:27:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:09.558 19:27:35 -- common/autotest_common.sh@10 -- # set +x 00:13:09.818 
************************************ 00:13:09.818 START TEST nvme_simple_copy 00:13:09.818 ************************************ 00:13:09.818 19:27:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:10.078 Initializing NVMe Controllers 00:13:10.078 Attaching to 0000:00:10.0 00:13:10.078 Controller supports SCC. Attached to 0000:00:10.0 00:13:10.078 Namespace ID: 1 size: 6GB 00:13:10.078 Initialization complete. 00:13:10.078 00:13:10.078 Controller QEMU NVMe Ctrl (12340 ) 00:13:10.078 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:10.078 Namespace Block Size:4096 00:13:10.078 Writing LBAs 0 to 63 with Random Data 00:13:10.078 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:10.078 LBAs matching Written Data: 64 00:13:10.078 ************************************ 00:13:10.078 END TEST nvme_simple_copy 00:13:10.078 ************************************ 00:13:10.078 00:13:10.078 real 0m0.290s 00:13:10.078 user 0m0.108s 00:13:10.078 sys 0m0.081s 00:13:10.078 19:27:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:10.078 19:27:35 -- common/autotest_common.sh@10 -- # set +x 00:13:10.078 ************************************ 00:13:10.078 END TEST nvme_scc 00:13:10.078 ************************************ 00:13:10.078 00:13:10.078 real 0m8.743s 00:13:10.078 user 0m1.427s 00:13:10.078 sys 0m2.297s 00:13:10.078 19:27:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:10.078 19:27:35 -- common/autotest_common.sh@10 -- # set +x 00:13:10.078 19:27:35 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:13:10.078 19:27:35 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:13:10.078 19:27:35 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:13:10.078 19:27:35 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:13:10.078 19:27:35 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:10.078 19:27:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:10.078 19:27:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:10.078 19:27:35 -- common/autotest_common.sh@10 -- # set +x 00:13:10.338 ************************************ 00:13:10.338 START TEST nvme_fdp 00:13:10.338 ************************************ 00:13:10.338 19:27:35 -- common/autotest_common.sh@1111 -- # test/nvme/nvme_fdp.sh 00:13:10.338 * Looking for test storage... 
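Before the FDP test spins up, note how the SCC target above was chosen. The selection reduces to two moves visible in the trace: a nameref accessor that reads a register out of a per-controller array, and a bit test on ONCS, whose bit 8 advertises the Copy command, the "SCC" the banner refers to. Condensed from the trace (helper names paraphrased):

    get_reg() {                           # mirrors get_nvme_ctrl_feature above
        local -n _ctrl=$1                 # nameref, as in 'local -n _ctrl=nvme1'
        [[ -n ${_ctrl[$2]} ]] && echo "${_ctrl[$2]}"
    }
    oncs=$(get_reg nvme1 oncs)            # -> 0x15d for all four controllers here
    (( oncs & 1 << 8 )) && echo nvme1     # bit 8 set, so nvme1 has Simple Copy

All four controllers passed, the first one echoed (nvme1, bdf 0000:00:10.0) became the target, and nvme_simple_copy then wrote LBAs 0 through 63 with random data, issued a copy to destination LBA 256, and verified that all 64 copied LBAs matched.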
00:13:10.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:10.338 19:27:35 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:10.338 19:27:35 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:10.338 19:27:35 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:10.338 19:27:35 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:10.338 19:27:35 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:10.338 19:27:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.338 19:27:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.338 19:27:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.338 19:27:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.338 19:27:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.339 19:27:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.339 19:27:35 -- paths/export.sh@5 -- # export PATH 00:13:10.339 19:27:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.339 19:27:35 -- nvme/functions.sh@10 -- # ctrls=() 00:13:10.339 19:27:35 -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:10.339 19:27:35 -- nvme/functions.sh@11 -- # nvmes=() 00:13:10.339 19:27:35 -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:10.339 19:27:35 -- nvme/functions.sh@12 -- # bdfs=() 00:13:10.339 19:27:35 -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:10.339 19:27:35 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:10.339 19:27:35 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:10.339 19:27:35 -- nvme/functions.sh@14 -- # nvme_name= 00:13:10.339 19:27:35 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:10.339 19:27:35 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:10.908 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:11.168 Waiting for block devices as requested 00:13:11.168 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:11.168 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:11.427 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:11.427 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:16.711 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:16.711 19:27:42 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:16.711 19:27:42 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:16.711 19:27:42 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:16.711 19:27:42 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:16.711 19:27:42 -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:16.711 19:27:42 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:16.711 19:27:42 -- scripts/common.sh@15 -- # local i 00:13:16.711 19:27:42 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:13:16.711 19:27:42 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:16.711 19:27:42 -- scripts/common.sh@24 -- # return 0 00:13:16.712 19:27:42 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:16.712 19:27:42 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:16.712 19:27:42 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@18 -- # shift 00:13:16.712 19:27:42 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- 
# nvme0[fr]='8.0.0 ' 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- 
nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 
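The run of eval lines here is functions.sh's nvme_get populating one bash associative array per controller: every 'field : value' line emitted by nvme id-ctrl is split on IFS=: and stored, so later tests can consult fields such as ${nvme0[oncs]} directly. A minimal sketch of the same idea, with array and device names assumed and without the padding-preserving quoting the real helper uses:

    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip the banner and blank lines
        reg=${reg//[[:space:]]/}         # field name, e.g. vid, mdts, oncs
        read -r val <<< "$val"           # trim surrounding whitespace
        ctrl[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)
    echo "mdts=${ctrl[mdts]}"            # 7 here: max transfer of 2^7 minimum-sized pages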
00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.712 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.712 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:16.712 19:27:42 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 
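Two of the fields just captured are easy to misread: wctemp and cctemp are composite-temperature thresholds reported in Kelvin, so the 343 and 373 recorded here are a 70 °C warning and a 100 °C critical limit. For instance:

    wctemp=343; cctemp=373
    echo "warning:  $((wctemp - 273)) C"   # 70 C
    echo "critical: $((cctemp - 273)) C"   # 100 C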
00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:16.713 19:27:42 
-- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 
19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.713 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.713 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.713 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 
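The queue-entry and optional-command fields recorded above are packed bitfields: sqes and cqes carry the required (low nibble) and maximum (high nibble) entry sizes as powers of two, and oncs advertises optional NVM commands, with bit 8 being the Copy command that the simple-copy test relies on. Decoding the captured values, assuming the NVMe 1.4/2.0 bit layout:

    sqes=0x66; cqes=0x44; oncs=0x15d
    echo "$(( 1 << (sqes & 0xf) ))/$(( 1 << (sqes >> 4) ))"   # SQ entry: 64 bytes required / 64 max
    echo "$(( 1 << (cqes & 0xf) ))/$(( 1 << (cqes >> 4) ))"   # CQ entry: 16 bytes required / 16 max
    (( oncs & 1 << 2 )) && echo "Dataset Management supported"
    (( oncs & 1 << 8 )) && echo "Copy command supported"      # bit 8 set -> SCC available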
00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # 
nvme0[ioccsz]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:16.714 19:27:42 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:16.714 19:27:42 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:13:16.714 19:27:42 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:13:16.714 19:27:42 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:13:16.714 19:27:42 -- 
nvme/functions.sh@18 -- # shift 00:13:16.714 19:27:42 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:13:16.714 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.714 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.714 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:13:16.715 19:27:42 
-- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[anagrpid]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r 
reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.715 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:16.715 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:16.715 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:13:16.716 19:27:42 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:13:16.716 19:27:42 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:13:16.716 19:27:42 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:13:16.716 19:27:42 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:13:16.716 19:27:42 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:16.716 19:27:42 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:13:16.716 19:27:42 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:13:16.716 19:27:42 -- scripts/common.sh@15 -- # local i 00:13:16.716 19:27:42 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:13:16.716 19:27:42 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:16.716 19:27:42 -- scripts/common.sh@24 -- # return 0 00:13:16.716 19:27:42 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:13:16.716 19:27:42 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:13:16.716 19:27:42 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@18 -- # shift 00:13:16.716 19:27:42 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:13:16.716 
19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.716 
19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.716 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.716 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.716 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:13:16.717 
19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:13:16.717 19:27:42 -- 
nvme/functions.sh@23 -- # nvme1[apsta]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- 
nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.717 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.717 19:27:42 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.717 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 
00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:13:16.718 
19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 
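[Editor's note] The trace above and below is the body of the nvme_get helper in nvme/functions.sh: it runs /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) against a device, splits each output record on ':' via IFS=: and read -r reg val, and evals every non-empty pair into a global associative array named after the device (nvme1[vid], nvme1[oacs], ...). A minimal standalone sketch of that idiom — not the SPDK helper itself, with a hypothetical name and simplified key sanitization — might look like:

    # Sketch of the nvme_get idiom seen in this trace (assumptions noted).
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                # same declaration as @20 above
        while IFS=: read -r reg val; do
            reg=${reg//[!a-zA-Z0-9_]/}     # keep a safe array key
            val=${val# }                   # drop the space after ':'
            # store only populated fields, mirroring the [[ -n ... ]] guard
            [[ -n $reg && -n $val ]] && eval "${ref}[$reg]=\"\$val\""
        done < <("$@")                     # e.g. nvme id-ctrl /dev/nvme1
    }
    # Hypothetical usage:
    #   nvme_get_sketch nvme1 nvme id-ctrl /dev/nvme1
    #   echo "${nvme1[sn]} ${nvme1[subnqn]}"

Because read assigns all remaining fields to its last variable, values that themselves contain colons (the ps0 and rwt power-state strings below) survive intact, and the eval's deferred "$val" expansion keeps multi-word values as a single array element.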
00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:13:16.718 19:27:42 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.718 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.718 19:27:42 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:13:16.718 19:27:42 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:16.718 19:27:42 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:13:16.719 19:27:42 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:13:16.719 19:27:42 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@18 -- # shift 00:13:16.719 19:27:42 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 
-- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # 
nvme1n1[nacwu]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 
-- # eval 'nvme1n1[mssrl]="128"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.719 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.719 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:13:16.719 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:16.720 
19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:13:16.720 19:27:42 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:13:16.720 19:27:42 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:13:16.720 19:27:42 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:13:16.720 19:27:42 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:13:16.720 19:27:42 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:16.720 19:27:42 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@49 -- # 
pci=0000:00:12.0 00:13:16.720 19:27:42 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:13:16.720 19:27:42 -- scripts/common.sh@15 -- # local i 00:13:16.720 19:27:42 -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:13:16.720 19:27:42 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:13:16.720 19:27:42 -- scripts/common.sh@24 -- # return 0 00:13:16.720 19:27:42 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:13:16.720 19:27:42 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:13:16.720 19:27:42 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@18 -- # shift 00:13:16.720 19:27:42 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.720 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:16.720 19:27:42 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.720 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # 
nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.721 19:27:42 -- 
nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.721 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:16.721 19:27:42 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:16.721 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 
-- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 
00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 
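[Editor's note] For reference while reading the id-ns records captured for nvme1n1 above (and for nvme2n1 below): each lbafN descriptor encodes one LBA format, where ms is the metadata bytes per block, lbads is the log2 of the data block size, and rp is a relative-performance hint; the descriptor marked "(in use)" is the active format. So lbaf7='ms:64 lbads:12 rp:0 (in use)' means 4096-byte data blocks with 64 bytes of metadata per block, and with nsze=0x17a17a (1,548,666 blocks) the namespace holds roughly 6.3 GB. A quick arithmetic check:

    lbads=12
    echo $(( 1 << lbads ))            # -> 4096 bytes per LBA
    echo $(( 0x17a17a * (1 << 12) ))  # -> 6343335936 bytes (~6.3 GB)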
00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.722 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:16.722 19:27:42 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:16.722 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.723 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.723 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 
00:13:16.723 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.723 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.723 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.723 19:27:42 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.723 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.723 19:27:42 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.723 19:27:42 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:16.723 19:27:42 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:13:16.723 19:27:42 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:16.723 19:27:42 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:16.723 19:27:42 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:16.723 19:27:42 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:16.723 19:27:42 -- nvme/functions.sh@18 -- # shift 00:13:16.723 19:27:42 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.723 19:27:42 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:16.723 19:27:42 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.723 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:16.723 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.723 
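The trace above is nvme_get at work: it runs nvme-cli, splits each "field : value" output line on ":", and evals the pair into a global associative array named after the device (nvme2[sqes]=0x66 and so on). A minimal sketch of that loop, assuming nvme-cli's plain-text output format; the helper below is illustrative and takes the full command as arguments, whereas the real functions.sh hardcodes the nvme binary path:

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                  # e.g. declare -gA nvme2=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip banner and blank lines
        reg=${reg//[[:space:]]/}         # "sqes      " -> "sqes"
        val=${val# }                     # drop the single leading space, keep trailing padding
        eval "${ref}[${reg}]=\"\$val\""  # nvme2[sqes]=0x66, nvme2[subnqn]=..., ...
    done < <("$@")                       # command whose output to parse
}

# usage matching the log's binary path:
#   nvme_get nvme2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1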
00:13:16.723 19:27:42 -- nvme/functions.sh@23 -- # nvme2n1 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:13:16.724 19:27:42 -- nvme/functions.sh@23 -- # nvme2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:13:16.724 19:27:42 -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:13:16.724 19:27:42 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:13:16.724 19:27:42 -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:13:16.724 19:27:42 -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:13:16.724 19:27:42 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
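nvme2n1 reports flbas=0x4 with lbaf4 marked "(in use)" and lbads:12, i.e. 4096-byte data blocks with no metadata. A hedged sketch of that decoding using the values captured above; the variable names are illustrative and the arithmetic follows the NVMe spec (flbas low nibble selects the format, lbads is a power-of-two data size):

flbas=0x4
fmt=$(( flbas & 0xf ))                       # low nibble selects the LBA format -> 4
lbaf='ms:0 lbads:12 rp:0 (in use)'           # nvme2n1[lbaf4] from the parse above
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"$lbaf")
echo "LBA format $fmt: $((1 << lbads))-byte data blocks, ${lbaf%% *} metadata"
# -> LBA format 4: 4096-byte data blocks, ms:0 metadata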
00:13:16.725 19:27:42 -- nvme/functions.sh@23 -- # nvme2n2 id-ns: every field matches nvme2n1 above (same geometry nsze=ncap=nuse=0x100000, same mssrl/mcl/msrc, same lbaf0-lbaf7 with lbaf4 in use)
00:13:16.725 19:27:42 -- nvme/functions.sh@58 -- # _ctrl_ns[2]=nvme2n2
00:13:16.726 19:27:42 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:13:16.726 19:27:42 -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:13:16.726 19:27:42 -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
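The per-namespace passes above all come from one loop in functions.sh that globs the controller's sysfs directory and records each namespace by number in the nameref'd array (functions.sh@53-58 in the trace). A compact sketch of that loop, reusing the nvme_get sketch from earlier; declare -n stands in for the function-local nameref:

declare -A nvme2_ns=()
declare -n _ctrl_ns=nvme2_ns                  # nameref, as at functions.sh@53
ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/${ctrl##*/}n"*; do           # nvme2n1 nvme2n2 nvme2n3
    [[ -e $ns ]] || continue
    ns_dev=${ns##*/}
    nvme_get "$ns_dev" /usr/local/src/nvme-cli/nvme id-ns "/dev/$ns_dev"
    _ctrl_ns[${ns##*n}]=$ns_dev               # key = namespace number (1, 2, 3)
done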
00:13:16.726 19:27:42 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:13:16.989 19:27:42 -- nvme/functions.sh@23 -- # nvme2n3 id-ns: every field matches nvme2n1 above (same geometry, same LBA formats, lbaf4 in use)
00:13:16.990 19:27:42 -- nvme/functions.sh@58 -- # _ctrl_ns[3]=nvme2n3
00:13:16.990 19:27:42 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:13:16.990 19:27:42 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:13:16.990 19:27:42 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:13:16.990 19:27:42 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:13:16.990 19:27:42 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:13:16.990 19:27:42 -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:13:16.990 19:27:42 -- scripts/common.sh@24 -- # pci_can_use 0000:00:13.0 -> return 0
00:13:16.990 19:27:42 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:13:16.990 19:27:42 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:13:16.990 19:27:42 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:13:16.990 19:27:42 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36
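With nvme2 fully recorded, the outer loop moves to the next controller: it registers the finished device in the ctrls/nvmes/bdfs/ordered_ctrls tables, then checks the next /sys/class/nvme entry against the PCI allow/block lists via pci_can_use before parsing it. A hedged sketch of that outer pass; the trace only shows the resulting pci value, so deriving it from the sysfs device link is an assumption here, not something the log confirms:

declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    # assumption: BDF taken from the sysfs device symlink (log shows only pci=...)
    pci=$(basename "$(readlink -f "$ctrl/device")")
    pci_can_use "$pci" || continue            # allow/block lists, scripts/common.sh
    ctrl_dev=${ctrl##*/}
    nvme_get "$ctrl_dev" /usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ctrl_dev"
    ctrls[$ctrl_dev]=$ctrl_dev
    nvmes[$ctrl_dev]=${ctrl_dev}_ns           # name of that controller's ns array
    bdfs[$ctrl_dev]=$pci                      # e.g. bdfs[nvme3]=0000:00:13.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done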
00:13:16.990 19:27:42 -- nvme/functions.sh@23 -- # nvme3 id-ctrl: ssvid=0x1af4 sn='12343   ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3 id-ctrl (cont.): crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:16.991 19:27:42 
-- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.991 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.991 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:16.991 19:27:42 
-- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.991 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
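Two of the values captured just above are packed fields: sqes=0x66 and cqes=0x44. In the Identify Controller structure the low nibble of each byte is the required (minimum) queue entry size and the high nibble the maximum, both as powers of two, so 0x66 decodes to 64-byte submission queue entries and 0x44 to 16-byte completion queue entries. A quick sketch of the decode (values hard-coded from this trace):

    decode_qes() {  # $1 = SQES or CQES byte from id-ctrl
        printf 'min %d bytes, max %d bytes\n' \
            $((2 ** ($1 & 0xf))) $((2 ** (($1 >> 4) & 0xf)))
    }
    decode_qes 0x66   # SQES: min 64 bytes, max 64 bytes
    decode_qes 0x44   # CQES: min 16 bytes, max 16 bytes
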
00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 
19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- 
# [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.992 19:27:42 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:16.992 19:27:42 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:16.992 19:27:42 -- nvme/functions.sh@21 -- # IFS=: 00:13:16.993 19:27:42 -- nvme/functions.sh@21 -- # read -r reg val 00:13:16.993 19:27:42 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:16.993 19:27:42 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:16.993 19:27:42 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:16.993 19:27:42 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:16.993 19:27:42 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:16.993 19:27:42 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:16.993 19:27:42 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:16.993 19:27:42 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:13:16.993 19:27:42 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:16.993 19:27:42 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:13:16.993 19:27:42 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:13:16.993 19:27:42 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:13:16.993 19:27:42 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:13:16.993 19:27:42 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:13:16.993 19:27:42 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:16.993 19:27:42 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:13:16.993 19:27:42 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:13:16.993 19:27:42 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:13:16.993 19:27:42 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:13:16.993 19:27:42 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:16.993 19:27:42 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:16.993 19:27:42 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:16.993 19:27:42 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:16.993 19:27:42 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:16.993 19:27:42 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:16.993 19:27:42 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:16.993 19:27:42 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:16.993 19:27:42 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:16.993 19:27:42 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:13:16.993 19:27:42 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:13:16.993 19:27:42 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:13:16.993 19:27:42 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:13:16.993 19:27:42 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:16.993 19:27:42 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:16.993 19:27:42 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:16.993 19:27:42 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:16.993 19:27:42 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:16.993 19:27:42 -- nvme/functions.sh@76 -- # 
echo 0x8000 00:13:16.993 19:27:42 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:16.993 19:27:42 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:16.993 19:27:42 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:16.993 19:27:42 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:13:16.993 19:27:42 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:13:16.993 19:27:42 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:13:16.993 19:27:42 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:13:16.993 19:27:42 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:16.993 19:27:42 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:16.993 19:27:42 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:16.993 19:27:42 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:16.993 19:27:42 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:16.993 19:27:42 -- nvme/functions.sh@76 -- # echo 0x88010 00:13:16.993 19:27:42 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:13:16.993 19:27:42 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:16.993 19:27:42 -- nvme/functions.sh@197 -- # echo nvme3 00:13:16.993 19:27:42 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:13:16.993 19:27:42 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:13:16.993 19:27:42 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:13:16.993 19:27:42 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:13:16.993 19:27:42 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:13:16.993 19:27:42 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:16.993 19:27:42 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:16.993 19:27:42 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:16.993 19:27:42 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:16.993 19:27:42 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:16.993 19:27:42 -- nvme/functions.sh@76 -- # echo 0x8000 00:13:16.993 19:27:42 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:13:16.993 19:27:42 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:13:16.993 19:27:42 -- nvme/functions.sh@204 -- # trap - ERR 00:13:16.993 19:27:42 -- nvme/functions.sh@204 -- # print_backtrace 00:13:16.993 19:27:42 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:13:16.993 19:27:42 -- common/autotest_common.sh@1139 -- # return 0 00:13:16.993 19:27:42 -- nvme/functions.sh@204 -- # trap - ERR 00:13:16.993 19:27:42 -- nvme/functions.sh@204 -- # print_backtrace 00:13:16.993 19:27:42 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:13:16.993 19:27:42 -- common/autotest_common.sh@1139 -- # return 0 00:13:16.993 19:27:42 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:13:16.993 19:27:42 -- nvme/functions.sh@206 -- # echo nvme3 00:13:16.993 19:27:42 -- nvme/functions.sh@207 -- # return 0 00:13:16.993 19:27:42 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:16.993 19:27:42 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:13.0 00:13:16.993 19:27:42 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:17.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:18.510 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:18.510 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:18.510 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:18.510 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:18.510 19:27:44 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement 
/home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:18.510 19:27:44 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:13:18.510 19:27:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:18.510 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:13:18.510 ************************************ 00:13:18.510 START TEST nvme_flexible_data_placement 00:13:18.510 ************************************ 00:13:18.510 19:27:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:18.769 Initializing NVMe Controllers 00:13:18.769 Attaching to 0000:00:13.0 00:13:18.769 Controller supports FDP Attached to 0000:00:13.0 00:13:18.769 Namespace ID: 1 Endurance Group ID: 1 00:13:18.769 Initialization complete. 00:13:18.769 00:13:18.769 ================================== 00:13:18.769 == FDP tests for Namespace: #01 == 00:13:18.769 ================================== 00:13:18.769 00:13:18.769 Get Feature: FDP: 00:13:18.769 ================= 00:13:18.769 Enabled: Yes 00:13:18.769 FDP configuration Index: 0 00:13:18.769 00:13:18.769 FDP configurations log page 00:13:18.769 =========================== 00:13:18.769 Number of FDP configurations: 1 00:13:18.769 Version: 0 00:13:18.769 Size: 112 00:13:18.769 FDP Configuration Descriptor: 0 00:13:18.769 Descriptor Size: 96 00:13:18.769 Reclaim Group Identifier format: 2 00:13:18.769 FDP Volatile Write Cache: Not Present 00:13:18.769 FDP Configuration: Valid 00:13:18.769 Vendor Specific Size: 0 00:13:18.769 Number of Reclaim Groups: 2 00:13:18.769 Number of Reclaim Unit Handles: 8 00:13:18.769 Max Placement Identifiers: 128 00:13:18.769 Number of Namespaces Supported: 256 00:13:18.769 Reclaim unit Nominal Size: 6000000 bytes 00:13:18.769 Estimated Reclaim Unit Time Limit: Not Reported 00:13:18.769 RUH Desc #000: RUH Type: Initially Isolated 00:13:18.769 RUH Desc #001: RUH Type: Initially Isolated 00:13:18.769 RUH Desc #002: RUH Type: Initially Isolated 00:13:18.769 RUH Desc #003: RUH Type: Initially Isolated 00:13:18.769 RUH Desc #004: RUH Type: Initially Isolated 00:13:18.769 RUH Desc #005: RUH Type: Initially Isolated 00:13:18.769 RUH Desc #006: RUH Type: Initially Isolated 00:13:18.769 RUH Desc #007: RUH Type: Initially Isolated 00:13:18.769 00:13:18.769 FDP reclaim unit handle usage log page 00:13:18.769 ====================================== 00:13:18.769 Number of Reclaim Unit Handles: 8 00:13:18.769 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:13:18.769 RUH Usage Desc #001: RUH Attributes: Unused 00:13:18.769 RUH Usage Desc #002: RUH Attributes: Unused 00:13:18.769 RUH Usage Desc #003: RUH Attributes: Unused 00:13:18.769 RUH Usage Desc #004: RUH Attributes: Unused 00:13:18.769 RUH Usage Desc #005: RUH Attributes: Unused 00:13:18.769 RUH Usage Desc #006: RUH Attributes: Unused 00:13:18.769 RUH Usage Desc #007: RUH Attributes: Unused 00:13:18.769 00:13:18.769 FDP statistics log page 00:13:18.769 ======================= 00:13:18.769 Host bytes with metadata written: 807362560 00:13:18.769 Media bytes with metadata written: 807522304 00:13:18.769 Media bytes erased: 0 00:13:18.769 00:13:18.769 FDP Reclaim unit handle status 00:13:18.769 ============================== 00:13:18.769 Number of RUHS descriptors: 2 00:13:18.769 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005e0a 00:13:18.769 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:13:18.769
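The controller selection that precedes this test keys on CTRATT bit 19, the Flexible Data Placement attribute: 0x88010 has the bit set, so nvme3 is chosen, while the 0x8000 controllers are skipped. A standalone sketch of the same probe, outside the functions.sh framework (assuming nvme-cli's JSON output, which reports the field as `ctratt`):

    for ctrl in /dev/nvme[0-9]; do
        [[ -e $ctrl ]] || continue
        ctratt=$(nvme id-ctrl "$ctrl" --output-format=json | jq -r '.ctratt')
        # bit 19 of CTRATT advertises Flexible Data Placement support
        (( ctratt & 1 << 19 )) && echo "$ctrl supports FDP"
    done
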
00:13:18.769 FDP write on placement id: 0 success 00:13:18.769 00:13:18.770 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:13:18.770 00:13:18.770 IO mgmt send: RUH update for Placement ID: #0 Success 00:13:18.770 00:13:18.770 Get Feature: FDP Events for Placement handle: #0 00:13:18.770 ======================== 00:13:18.770 Number of FDP Events: 6 00:13:18.770 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:13:18.770 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:13:18.770 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:13:18.770 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:13:18.770 FDP Event: #4 Type: Media Reallocated Enabled: No 00:13:18.770 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:13:18.770 00:13:18.770 FDP events log page 00:13:18.770 =================== 00:13:18.770 Number of FDP events: 1 00:13:18.770 FDP Event #0: 00:13:18.770 Event Type: RU Not Written to Capacity 00:13:18.770 Placement Identifier: Valid 00:13:18.770 NSID: Valid 00:13:18.770 Location: Valid 00:13:18.770 Placement Identifier: 0 00:13:18.770 Event Timestamp: e 00:13:18.770 Namespace Identifier: 1 00:13:18.770 Reclaim Group Identifier: 0 00:13:18.770 Reclaim Unit Handle Identifier: 0 00:13:18.770 00:13:18.770 FDP test passed 00:13:18.770 ************************************ 00:13:18.770 END TEST nvme_flexible_data_placement 00:13:18.770 ************************************ 00:13:18.770 00:13:18.770 real 0m0.281s 00:13:18.770 user 0m0.097s 00:13:18.770 sys 0m0.082s 00:13:18.770 19:27:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:18.770 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:13:18.770 ************************************ 00:13:18.770 END TEST nvme_fdp 00:13:18.770 ************************************ 00:13:18.770 00:13:18.770 real 0m8.679s 00:13:18.770 user 0m1.465s 00:13:18.770 sys 0m2.245s 00:13:18.770 19:27:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:18.770 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:13:19.029 19:27:44 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:13:19.029 19:27:44 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:19.029 19:27:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:19.029 19:27:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:19.029 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:13:19.029 ************************************ 00:13:19.029 START TEST nvme_rpc 00:13:19.029 ************************************ 00:13:19.029 19:27:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:13:19.029 * Looking for test storage... 
00:13:19.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:19.288 19:27:44 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:19.288 19:27:44 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:19.288 19:27:44 -- common/autotest_common.sh@1510 -- # bdfs=() 00:13:19.288 19:27:44 -- common/autotest_common.sh@1510 -- # local bdfs 00:13:19.288 19:27:44 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:13:19.288 19:27:44 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:13:19.288 19:27:44 -- common/autotest_common.sh@1499 -- # bdfs=() 00:13:19.288 19:27:44 -- common/autotest_common.sh@1499 -- # local bdfs 00:13:19.288 19:27:44 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:19.289 19:27:44 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:19.289 19:27:44 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:13:19.289 19:27:44 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:13:19.289 19:27:44 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:19.289 19:27:44 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:13:19.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.289 19:27:44 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:19.289 19:27:44 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=73098 00:13:19.289 19:27:44 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:19.289 19:27:44 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:19.289 19:27:44 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 73098 00:13:19.289 19:27:44 -- common/autotest_common.sh@817 -- # '[' -z 73098 ']' 00:13:19.289 19:27:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.289 19:27:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:19.289 19:27:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.289 19:27:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:19.289 19:27:44 -- common/autotest_common.sh@10 -- # set +x 00:13:19.289 [2024-04-24 19:27:44.917702] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
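get_first_nvme_bdf above builds the bdf list by piping gen_nvme.sh through jq, printing the addresses, and echoing the first one (0000:00:10.0 here). While the kernel nvme driver is still bound, the same list can be read straight from sysfs; a hedged sketch (this only works before setup.sh rebinds the devices to uio_pci_generic, as happens later in this run):

    for c in /sys/class/nvme/nvme*; do
        [[ -e $c ]] || continue
        cat "$c/address"     # PCI bdf, e.g. 0000:00:10.0
    done | sort | head -n1
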
00:13:19.289 [2024-04-24 19:27:44.917896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73098 ] 00:13:19.548 [2024-04-24 19:27:45.076929] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:19.806 [2024-04-24 19:27:45.339388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.806 [2024-04-24 19:27:45.339424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.743 19:27:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:20.743 19:27:46 -- common/autotest_common.sh@850 -- # return 0 00:13:20.743 19:27:46 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:21.002 Nvme0n1 00:13:21.002 19:27:46 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:21.002 19:27:46 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:21.261 request: 00:13:21.261 { 00:13:21.261 "filename": "non_existing_file", 00:13:21.261 "bdev_name": "Nvme0n1", 00:13:21.261 "method": "bdev_nvme_apply_firmware", 00:13:21.261 "req_id": 1 00:13:21.261 } 00:13:21.261 Got JSON-RPC error response 00:13:21.261 response: 00:13:21.261 { 00:13:21.261 "code": -32603, 00:13:21.261 "message": "open file failed." 00:13:21.261 } 00:13:21.261 19:27:46 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:21.261 19:27:46 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:21.261 19:27:46 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:21.520 19:27:47 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:21.520 19:27:47 -- nvme/nvme_rpc.sh@40 -- # killprocess 73098 00:13:21.520 19:27:47 -- common/autotest_common.sh@936 -- # '[' -z 73098 ']' 00:13:21.520 19:27:47 -- common/autotest_common.sh@940 -- # kill -0 73098 00:13:21.520 19:27:47 -- common/autotest_common.sh@941 -- # uname 00:13:21.520 19:27:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:21.520 19:27:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73098 00:13:21.520 19:27:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:21.520 19:27:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:21.520 19:27:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73098' 00:13:21.520 killing process with pid 73098 00:13:21.520 19:27:47 -- common/autotest_common.sh@955 -- # kill 73098 00:13:21.520 19:27:47 -- common/autotest_common.sh@960 -- # wait 73098 00:13:24.809 00:13:24.809 real 0m5.150s 00:13:24.809 user 0m9.369s 00:13:24.809 sys 0m0.698s 00:13:24.809 ************************************ 00:13:24.809 END TEST nvme_rpc 00:13:24.809 ************************************ 00:13:24.809 19:27:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:24.809 19:27:49 -- common/autotest_common.sh@10 -- # set +x 00:13:24.809 19:27:49 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:24.809 19:27:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:24.809 19:27:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:24.809 19:27:49 -- common/autotest_common.sh@10 -- # set +x 00:13:24.809 ************************************ 00:13:24.809 START TEST 
nvme_rpc_timeouts 00:13:24.809 ************************************ 00:13:24.809 19:27:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:24.809 * Looking for test storage... 00:13:24.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:24.809 19:27:50 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:24.809 19:27:50 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_73186 00:13:24.809 19:27:50 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_73186 00:13:24.809 19:27:50 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=73214 00:13:24.809 19:27:50 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:24.809 19:27:50 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:13:24.809 19:27:50 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 73214 00:13:24.809 19:27:50 -- common/autotest_common.sh@817 -- # '[' -z 73214 ']' 00:13:24.809 19:27:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.809 19:27:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:24.809 19:27:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.809 19:27:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:24.809 19:27:50 -- common/autotest_common.sh@10 -- # set +x 00:13:24.809 [2024-04-24 19:27:50.091284] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:13:24.809 [2024-04-24 19:27:50.091517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73214 ] 00:13:24.809 [2024-04-24 19:27:50.260306] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:25.104 [2024-04-24 19:27:50.520231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.104 [2024-04-24 19:27:50.520270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.063 19:27:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:26.063 Checking default timeout settings: 00:13:26.063 19:27:51 -- common/autotest_common.sh@850 -- # return 0 00:13:26.063 19:27:51 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:26.063 19:27:51 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:26.321 Making settings changes with rpc: 00:13:26.321 19:27:51 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:26.321 19:27:51 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:26.579 Check default vs. modified settings: 00:13:26.579 19:27:52 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:13:26.579 19:27:52 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:26.837 19:27:52 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:26.837 19:27:52 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:26.837 19:27:52 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_73186 00:13:26.837 19:27:52 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:26.837 19:27:52 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:26.837 19:27:52 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_73186 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:27.097 Setting action_on_timeout is changed as expected. 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_73186 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_73186 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:27.097 Setting timeout_us is changed as expected. 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_73186 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_73186 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:27.097 Setting timeout_admin_us is changed as expected. 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
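The check above snapshots the target's JSON configuration with save_config before and after bdev_nvme_set_options, isolates each field with grep/awk, and strips punctuation with sed so the raw values can be compared. The whole traced pattern condenses to a few lines (rpc.py path and option values taken from this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default
    $rpc bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
    done
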
00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_73186 /tmp/settings_modified_73186 00:13:27.097 19:27:52 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 73214 00:13:27.097 19:27:52 -- common/autotest_common.sh@936 -- # '[' -z 73214 ']' 00:13:27.097 19:27:52 -- common/autotest_common.sh@940 -- # kill -0 73214 00:13:27.097 19:27:52 -- common/autotest_common.sh@941 -- # uname 00:13:27.097 19:27:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:27.097 19:27:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73214 00:13:27.097 19:27:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:27.097 19:27:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:27.097 19:27:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73214' 00:13:27.097 killing process with pid 73214 00:13:27.097 19:27:52 -- common/autotest_common.sh@955 -- # kill 73214 00:13:27.097 19:27:52 -- common/autotest_common.sh@960 -- # wait 73214 00:13:30.409 19:27:55 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:13:30.409 RPC TIMEOUT SETTING TEST PASSED. 00:13:30.409 00:13:30.409 real 0m5.544s 00:13:30.409 user 0m10.362s 00:13:30.409 sys 0m0.690s 00:13:30.409 ************************************ 00:13:30.409 END TEST nvme_rpc_timeouts 00:13:30.409 ************************************ 00:13:30.409 19:27:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:30.409 19:27:55 -- common/autotest_common.sh@10 -- # set +x 00:13:30.409 19:27:55 -- spdk/autotest.sh@241 -- # '[' 1 -eq 0 ']' 00:13:30.409 19:27:55 -- spdk/autotest.sh@245 -- # [[ 1 -eq 1 ]] 00:13:30.409 19:27:55 -- spdk/autotest.sh@246 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:30.409 19:27:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:30.409 19:27:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:30.409 19:27:55 -- common/autotest_common.sh@10 -- # set +x 00:13:30.409 ************************************ 00:13:30.409 START TEST nvme_xnvme 00:13:30.409 ************************************ 00:13:30.409 19:27:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:30.409 * Looking for test storage... 
00:13:30.409 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:30.409 19:27:55 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:30.409 19:27:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.409 19:27:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.409 19:27:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.409 19:27:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.409 19:27:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.409 19:27:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.409 19:27:55 -- paths/export.sh@5 -- # export PATH 00:13:30.409 19:27:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:30.409 19:27:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:30.409 19:27:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:30.409 19:27:55 -- common/autotest_common.sh@10 -- # set +x 00:13:30.409 ************************************ 00:13:30.409 START TEST xnvme_to_malloc_dd_copy 00:13:30.409 ************************************ 00:13:30.409 19:27:55 -- common/autotest_common.sh@1111 -- # malloc_to_xnvme_copy 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:30.409 19:27:55 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:30.409 19:27:55 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:30.409 19:27:55 -- dd/common.sh@191 -- # return 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@18 -- # local io 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:30.409 
19:27:55 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:30.409 19:27:55 -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:30.409 19:27:55 -- dd/common.sh@31 -- # xtrace_disable 00:13:30.409 19:27:55 -- common/autotest_common.sh@10 -- # set +x 00:13:30.409 { 00:13:30.409 "subsystems": [ 00:13:30.409 { 00:13:30.409 "subsystem": "bdev", 00:13:30.409 "config": [ 00:13:30.409 { 00:13:30.409 "params": { 00:13:30.409 "block_size": 512, 00:13:30.409 "num_blocks": 2097152, 00:13:30.409 "name": "malloc0" 00:13:30.410 }, 00:13:30.410 "method": "bdev_malloc_create" 00:13:30.410 }, 00:13:30.410 { 00:13:30.410 "params": { 00:13:30.410 "io_mechanism": "libaio", 00:13:30.410 "filename": "/dev/nullb0", 00:13:30.410 "name": "null0" 00:13:30.410 }, 00:13:30.410 "method": "bdev_xnvme_create" 00:13:30.410 }, 00:13:30.410 { 00:13:30.410 "method": "bdev_wait_for_examine" 00:13:30.410 } 00:13:30.410 ] 00:13:30.410 } 00:13:30.410 ] 00:13:30.410 } 00:13:30.410 [2024-04-24 19:27:55.906965] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
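The gen_conf JSON traced above reaches spdk_dd through /dev/fd/62. An equivalent standalone invocation with the config in a regular file would look like this sketch (/tmp/xnvme_copy.json is a hypothetical path; the binary path and parameters match the trace):

  cat > /tmp/xnvme_copy.json <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"method": "bdev_malloc_create",
     "params": {"name": "malloc0", "num_blocks": 2097152, "block_size": 512}},
    {"method": "bdev_xnvme_create",
     "params": {"name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio"}},
    {"method": "bdev_wait_for_examine"}]}]}
  EOF
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json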
00:13:30.410 [2024-04-24 19:27:55.907168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73378 ] 00:13:30.410 [2024-04-24 19:27:56.073691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.669 [2024-04-24 19:27:56.330473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:42.390  Copying: 236/1024 [MB] (236 MBps) Copying: 467/1024 [MB] (231 MBps) Copying: 690/1024 [MB] (222 MBps) Copying: 922/1024 [MB] (231 MBps) Copying: 1024/1024 [MB] (average 231 MBps) 00:13:42.390 00:13:42.390 19:28:07 -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:42.390 19:28:07 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:42.390 19:28:07 -- dd/common.sh@31 -- # xtrace_disable 00:13:42.390 19:28:07 -- common/autotest_common.sh@10 -- # set +x 00:13:42.390 { 00:13:42.390 "subsystems": [ 00:13:42.390 { 00:13:42.390 "subsystem": "bdev", 00:13:42.390 "config": [ 00:13:42.390 { 00:13:42.390 "params": { 00:13:42.390 "block_size": 512, 00:13:42.390 "num_blocks": 2097152, 00:13:42.390 "name": "malloc0" 00:13:42.390 }, 00:13:42.390 "method": "bdev_malloc_create" 00:13:42.390 }, 00:13:42.390 { 00:13:42.390 "params": { 00:13:42.390 "io_mechanism": "libaio", 00:13:42.390 "filename": "/dev/nullb0", 00:13:42.390 "name": "null0" 00:13:42.390 }, 00:13:42.390 "method": "bdev_xnvme_create" 00:13:42.390 }, 00:13:42.390 { 00:13:42.390 "method": "bdev_wait_for_examine" 00:13:42.390 } 00:13:42.390 ] 00:13:42.390 } 00:13:42.390 ] 00:13:42.390 } 00:13:42.390 [2024-04-24 19:28:07.264517] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:13:42.390 [2024-04-24 19:28:07.264736] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73504 ] 00:13:42.390 [2024-04-24 19:28:07.432676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:42.390 [2024-04-24 19:28:07.702983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.710  Copying: 224/1024 [MB] (224 MBps) Copying: 448/1024 [MB] (224 MBps) Copying: 676/1024 [MB] (228 MBps) Copying: 907/1024 [MB] (230 MBps) Copying: 1024/1024 [MB] (average 226 MBps) 00:13:53.710 00:13:53.710 19:28:18 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:53.710 19:28:18 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:53.710 19:28:18 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:53.710 19:28:18 -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:53.710 19:28:18 -- dd/common.sh@31 -- # xtrace_disable 00:13:53.710 19:28:18 -- common/autotest_common.sh@10 -- # set +x 00:13:53.710 { 00:13:53.710 "subsystems": [ 00:13:53.710 { 00:13:53.710 "subsystem": "bdev", 00:13:53.710 "config": [ 00:13:53.710 { 00:13:53.710 "params": { 00:13:53.710 "block_size": 512, 00:13:53.710 "num_blocks": 2097152, 00:13:53.710 "name": "malloc0" 00:13:53.710 }, 00:13:53.710 "method": "bdev_malloc_create" 00:13:53.710 }, 00:13:53.710 { 00:13:53.710 "params": { 00:13:53.710 "io_mechanism": "io_uring", 00:13:53.710 "filename": "/dev/nullb0", 00:13:53.710 "name": "null0" 00:13:53.710 }, 00:13:53.710 "method": "bdev_xnvme_create" 00:13:53.710 }, 00:13:53.710 { 00:13:53.710 "method": "bdev_wait_for_examine" 00:13:53.710 } 00:13:53.710 ] 00:13:53.710 } 00:13:53.710 ] 00:13:53.710 } 00:13:53.710 [2024-04-24 19:28:18.891726] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
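The io_uring pass above drives the identical bdev graph; against the libaio sketch earlier, the only delta is the io_mechanism string in the bdev_xnvme_create params. Reusing the hypothetical /tmp/xnvme_copy.json:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 \
      --json <(sed 's/"libaio"/"io_uring"/' /tmp/xnvme_copy.json)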
00:13:53.710 [2024-04-24 19:28:18.891856] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73630 ] 00:13:53.710 [2024-04-24 19:28:19.060137] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.710 [2024-04-24 19:28:19.334505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.466  Copying: 217/1024 [MB] (217 MBps) Copying: 454/1024 [MB] (237 MBps) Copying: 693/1024 [MB] (239 MBps) Copying: 926/1024 [MB] (232 MBps) Copying: 1024/1024 [MB] (average 232 MBps) 00:14:05.466 00:14:05.466 19:28:30 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:05.466 19:28:30 -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:05.466 19:28:30 -- dd/common.sh@31 -- # xtrace_disable 00:14:05.467 19:28:30 -- common/autotest_common.sh@10 -- # set +x 00:14:05.467 { 00:14:05.467 "subsystems": [ 00:14:05.467 { 00:14:05.467 "subsystem": "bdev", 00:14:05.467 "config": [ 00:14:05.467 { 00:14:05.467 "params": { 00:14:05.467 "block_size": 512, 00:14:05.467 "num_blocks": 2097152, 00:14:05.467 "name": "malloc0" 00:14:05.467 }, 00:14:05.467 "method": "bdev_malloc_create" 00:14:05.467 }, 00:14:05.467 { 00:14:05.467 "params": { 00:14:05.467 "io_mechanism": "io_uring", 00:14:05.467 "filename": "/dev/nullb0", 00:14:05.467 "name": "null0" 00:14:05.467 }, 00:14:05.467 "method": "bdev_xnvme_create" 00:14:05.467 }, 00:14:05.467 { 00:14:05.467 "method": "bdev_wait_for_examine" 00:14:05.467 } 00:14:05.467 ] 00:14:05.467 } 00:14:05.467 ] 00:14:05.467 } 00:14:05.467 [2024-04-24 19:28:30.362701] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
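Each pass moves the whole malloc0 bdev, and the sizes line up: 2097152 blocks of 512 bytes is exactly the 1024 MB the progress lines report, so at the ~230 MB/s averages above one direction takes roughly 4.5 seconds. Quick check:

  echo $(( 2097152 * 512 / 1024 / 1024 ))    # prints 1024, the MB moved per copy pass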
00:14:05.467 [2024-04-24 19:28:30.362898] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73762 ] 00:14:05.467 [2024-04-24 19:28:30.531006] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.467 [2024-04-24 19:28:30.796857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:16.555  Copying: 242/1024 [MB] (242 MBps) Copying: 503/1024 [MB] (261 MBps) Copying: 752/1024 [MB] (248 MBps) Copying: 991/1024 [MB] (239 MBps) Copying: 1024/1024 [MB] (average 248 MBps) 00:14:16.555 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:16.555 19:28:41 -- dd/common.sh@195 -- # modprobe -r null_blk 00:14:16.555 00:14:16.555 real 0m45.895s 00:14:16.555 user 0m41.392s 00:14:16.555 sys 0m3.955s 00:14:16.555 ************************************ 00:14:16.555 END TEST xnvme_to_malloc_dd_copy 00:14:16.555 ************************************ 00:14:16.555 19:28:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:16.555 19:28:41 -- common/autotest_common.sh@10 -- # set +x 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:16.555 19:28:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:16.555 19:28:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:16.555 19:28:41 -- common/autotest_common.sh@10 -- # set +x 00:14:16.555 ************************************ 00:14:16.555 START TEST xnvme_bdevperf 00:14:16.555 ************************************ 00:14:16.555 19:28:41 -- common/autotest_common.sh@1111 -- # xnvme_bdevperf 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:16.555 19:28:41 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:14:16.555 19:28:41 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:14:16.555 19:28:41 -- dd/common.sh@191 -- # return 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@60 -- # local io 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:16.555 19:28:41 -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:16.555 19:28:41 -- dd/common.sh@31 -- # xtrace_disable 00:14:16.555 19:28:41 -- common/autotest_common.sh@10 -- # set +x 00:14:16.555 { 00:14:16.555 "subsystems": [ 00:14:16.555 { 00:14:16.555 "subsystem": "bdev", 00:14:16.555 "config": [ 00:14:16.555 { 00:14:16.555 "params": { 00:14:16.555 "io_mechanism": "libaio", 00:14:16.555 "filename": "/dev/nullb0", 00:14:16.555 "name": "null0" 
00:14:16.555 },
00:14:16.555 "method": "bdev_xnvme_create"
00:14:16.555 },
00:14:16.555 {
00:14:16.555 "method": "bdev_wait_for_examine"
00:14:16.555 }
00:14:16.555 ]
00:14:16.555 }
00:14:16.555 ]
00:14:16.555 }
00:14:16.555 [2024-04-24 19:28:41.922935] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:14:16.555 [2024-04-24 19:28:41.923067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73915 ]
00:14:16.555 [2024-04-24 19:28:42.077856] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:16.815 [2024-04-24 19:28:42.357118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:17.383 Running I/O for 5 seconds...
00:14:22.693
00:14:22.693 Latency(us)
00:14:22.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:22.693 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:14:22.693 null0 : 5.00 146322.73 571.57 0.00 0.00 434.41 146.67 651.07
00:14:22.693 ===================================================================================================================
00:14:22.693 Total : 146322.73 571.57 0.00 0.00 434.41 146.67 651.07
00:14:24.067 19:28:49 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}"
00:14:24.068 19:28:49 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring
00:14:24.068 19:28:49 -- xnvme/xnvme.sh@74 -- # gen_conf
00:14:24.068 19:28:49 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096
00:14:24.068 19:28:49 -- dd/common.sh@31 -- # xtrace_disable
00:14:24.068 19:28:49 -- common/autotest_common.sh@10 -- # set +x
00:14:24.068 {
00:14:24.068 "subsystems": [
00:14:24.068 {
00:14:24.068 "subsystem": "bdev",
00:14:24.068 "config": [
00:14:24.068 {
00:14:24.068 "params": {
00:14:24.068 "io_mechanism": "io_uring",
00:14:24.068 "filename": "/dev/nullb0",
00:14:24.068 "name": "null0"
00:14:24.068 },
00:14:24.068 "method": "bdev_xnvme_create"
00:14:24.068 },
00:14:24.068 {
00:14:24.068 "method": "bdev_wait_for_examine"
00:14:24.068 }
00:14:24.068 ]
00:14:24.068 }
00:14:24.068 ]
00:14:24.068 }
00:14:24.068 [2024-04-24 19:28:49.469119] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:14:24.068 [2024-04-24 19:28:49.469247] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74000 ]
00:14:24.068 [2024-04-24 19:28:49.638329] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:24.328 [2024-04-24 19:28:49.929388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:14:24.909 Running I/O for 5 seconds...
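bdevperf consumes the same style of JSON config as spdk_dd; the traced command line sets queue depth (-q 64), workload (-w randread), duration (-t 5 seconds), target bdev (-T null0) and I/O size (-o 4096 bytes). A standalone sketch of the same run (/tmp/xnvme_null.json is a hypothetical file holding just the null0 bdev_xnvme_create config):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/xnvme_null.json -q 64 -w randread -t 5 -T null0 -o 4096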
00:14:30.202
00:14:30.202 Latency(us)
00:14:30.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:30.202 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:14:30.202 null0 : 5.00 183295.10 716.00 0.00 0.00 346.37 200.33 958.71
00:14:30.202 ===================================================================================================================
00:14:30.202 Total : 183295.10 716.00 0.00 0.00 346.37 200.33 958.71
00:14:31.583 19:28:56 -- xnvme/xnvme.sh@82 -- # remove_null_blk
00:14:31.583 19:28:56 -- dd/common.sh@195 -- # modprobe -r null_blk
00:14:31.583
00:14:31.583 real 0m15.129s
00:14:31.583 user 0m12.384s
00:14:31.583 sys 0m2.505s
00:14:31.583 19:28:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:14:31.583 19:28:56 -- common/autotest_common.sh@10 -- # set +x
00:14:31.583 ************************************
00:14:31.583 END TEST xnvme_bdevperf
00:14:31.583 ************************************
00:14:31.583 ************************************
00:14:31.583 END TEST nvme_xnvme
00:14:31.583 ************************************
00:14:31.583
00:14:31.583 real 1m1.412s
00:14:31.583 user 0m53.911s
00:14:31.583 sys 0m6.694s
00:14:31.583 19:28:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:14:31.583 19:28:56 -- common/autotest_common.sh@10 -- # set +x
00:14:31.583 19:28:57 -- spdk/autotest.sh@247 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:14:31.583 19:28:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:31.583 19:28:57 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:31.583 19:28:57 -- common/autotest_common.sh@10 -- # set +x
00:14:31.583 ************************************
00:14:31.583 START TEST blockdev_xnvme
00:14:31.583 ************************************
00:14:31.583 19:28:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme
00:14:31.583 * Looking for test storage...
00:14:31.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:31.583 19:28:57 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:31.583 19:28:57 -- bdev/nbd_common.sh@6 -- # set -e 00:14:31.583 19:28:57 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:31.583 19:28:57 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:31.583 19:28:57 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:31.583 19:28:57 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:31.583 19:28:57 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:31.583 19:28:57 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:31.583 19:28:57 -- bdev/blockdev.sh@20 -- # : 00:14:31.583 19:28:57 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:14:31.583 19:28:57 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:14:31.583 19:28:57 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:14:31.583 19:28:57 -- bdev/blockdev.sh@674 -- # uname -s 00:14:31.583 19:28:57 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:14:31.583 19:28:57 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:14:31.583 19:28:57 -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:14:31.583 19:28:57 -- bdev/blockdev.sh@683 -- # crypto_device= 00:14:31.583 19:28:57 -- bdev/blockdev.sh@684 -- # dek= 00:14:31.583 19:28:57 -- bdev/blockdev.sh@685 -- # env_ctx= 00:14:31.583 19:28:57 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:14:31.583 19:28:57 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:14:31.583 19:28:57 -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:14:31.583 19:28:57 -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:14:31.583 19:28:57 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:14:31.583 19:28:57 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74156 00:14:31.583 19:28:57 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:31.583 19:28:57 -- bdev/blockdev.sh@49 -- # waitforlisten 74156 00:14:31.583 19:28:57 -- common/autotest_common.sh@817 -- # '[' -z 74156 ']' 00:14:31.583 19:28:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.583 19:28:57 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:31.583 19:28:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:31.583 19:28:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.583 19:28:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:31.583 19:28:57 -- common/autotest_common.sh@10 -- # set +x 00:14:31.843 [2024-04-24 19:28:57.327371] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
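start_spdk_tgt plus waitforlisten amount to launching the target and polling its JSON-RPC socket until it answers. A rough by-hand sketch (assumes hugepages were already configured, e.g. via scripts/setup.sh; /var/tmp/spdk.sock is the default socket):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5    # retry until the RPC server is accepting connections
  done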
00:14:31.843 [2024-04-24 19:28:57.327582] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74156 ] 00:14:31.843 [2024-04-24 19:28:57.497447] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.412 [2024-04-24 19:28:57.781934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.372 19:28:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:33.372 19:28:58 -- common/autotest_common.sh@850 -- # return 0 00:14:33.372 19:28:58 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:14:33.372 19:28:58 -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:14:33.372 19:28:58 -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:33.372 19:28:58 -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:33.372 19:28:58 -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:33.631 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:33.631 Waiting for block devices as requested 00:14:33.631 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:33.890 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:33.890 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:33.890 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:39.160 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:39.160 19:29:04 -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:39.160 19:29:04 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:39.160 19:29:04 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:39.160 19:29:04 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:39.160 19:29:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.160 19:29:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:39.160 19:29:04 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:39.160 19:29:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:39.160 19:29:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.161 19:29:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.161 19:29:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:39.161 19:29:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:39.161 19:29:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:39.161 19:29:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.161 19:29:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.161 19:29:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:14:39.161 19:29:04 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:14:39.161 19:29:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:39.161 19:29:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.161 19:29:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.161 19:29:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:14:39.161 19:29:04 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:14:39.161 19:29:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:39.161 19:29:04 -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.161 19:29:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.161 19:29:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:14:39.161 19:29:04 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:14:39.161 19:29:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:39.161 19:29:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.161 19:29:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.161 19:29:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:14:39.161 19:29:04 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:14:39.161 19:29:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:39.161 19:29:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.161 19:29:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:39.161 19:29:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:14:39.161 19:29:04 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:14:39.161 19:29:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:39.161 19:29:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:39.161 19:29:04 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:39.161 19:29:04 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:39.161 19:29:04 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:39.161 19:29:04 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:39.161 19:29:04 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:39.161 19:29:04 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:39.161 19:29:04 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:39.161 19:29:04 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:39.161 19:29:04 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:39.161 19:29:04 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:39.161 19:29:04 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:39.161 19:29:04 -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:39.161 19:29:04 -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:39.161 19:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.161 19:29:04 -- common/autotest_common.sh@10 -- # set +x 
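The zoned-device filter above reduces to reading /sys/block/<name>/queue/zoned for every namespace and skipping anything not reporting "none"; all six namespaces here are unzoned, so each gets a bdev_xnvme_create command queued. The batch expanded in the next trace line could be rebuilt and replayed by hand roughly like this (a sketch; rpc.py can take one command per stdin line in batch mode):

  for nvme in /dev/nvme*n*; do
      [[ $(cat /sys/block/${nvme##*/}/queue/zoned) == none ]] || continue
      echo "bdev_xnvme_create $nvme ${nvme##*/} io_uring"
  done | /home/vagrant/spdk_repo/spdk/scripts/rpc.py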
00:14:39.161 19:29:04 -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:39.161 nvme0n1 00:14:39.161 nvme1n1 00:14:39.161 nvme2n1 00:14:39.161 nvme2n2 00:14:39.161 nvme2n3 00:14:39.161 nvme3n1 00:14:39.161 19:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:14:39.161 19:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.161 19:29:04 -- common/autotest_common.sh@10 -- # set +x 00:14:39.161 19:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@740 -- # cat 00:14:39.161 19:29:04 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:14:39.161 19:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.161 19:29:04 -- common/autotest_common.sh@10 -- # set +x 00:14:39.161 19:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:14:39.161 19:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.161 19:29:04 -- common/autotest_common.sh@10 -- # set +x 00:14:39.161 19:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:39.161 19:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.161 19:29:04 -- common/autotest_common.sh@10 -- # set +x 00:14:39.161 19:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:14:39.161 19:29:04 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:14:39.161 19:29:04 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:14:39.161 19:29:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:39.161 19:29:04 -- common/autotest_common.sh@10 -- # set +x 00:14:39.161 19:29:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:39.161 19:29:04 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:14:39.161 19:29:04 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "595c45ca-6b93-4909-92db-06b86d8af04b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "595c45ca-6b93-4909-92db-06b86d8af04b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "7682f712-0688-4bfb-9416-7826344b2422"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7682f712-0688-4bfb-9416-7826344b2422",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "30eefc08-646a-4529-a2cf-44bc3a8a8f81"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "30eefc08-646a-4529-a2cf-44bc3a8a8f81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e2b78ebc-1416-4317-aa40-d72044dc35ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e2b78ebc-1416-4317-aa40-d72044dc35ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "cd630103-c6d9-4e18-adc2-f90cf1ca6006"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd630103-c6d9-4e18-adc2-f90cf1ca6006",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "29f6b104-d895-469d-92ce-019609b6a315"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "29f6b104-d895-469d-92ce-019609b6a315",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:39.161 19:29:04 -- bdev/blockdev.sh@749 -- # jq -r .name 00:14:39.161 19:29:04 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:14:39.161 19:29:04 -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:14:39.161 19:29:04 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:14:39.161 19:29:04 -- bdev/blockdev.sh@754 -- # killprocess 74156 00:14:39.161 19:29:04 -- common/autotest_common.sh@936 -- # '[' -z 74156 ']' 00:14:39.161 19:29:04 -- common/autotest_common.sh@940 -- # kill -0 74156 00:14:39.420 
19:29:04 -- common/autotest_common.sh@941 -- # uname 00:14:39.420 19:29:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:39.420 19:29:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74156 00:14:39.420 killing process with pid 74156 00:14:39.420 19:29:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:39.420 19:29:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:39.420 19:29:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74156' 00:14:39.420 19:29:04 -- common/autotest_common.sh@955 -- # kill 74156 00:14:39.420 19:29:04 -- common/autotest_common.sh@960 -- # wait 74156 00:14:42.707 19:29:07 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:42.707 19:29:07 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:42.707 19:29:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:14:42.707 19:29:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:42.707 19:29:07 -- common/autotest_common.sh@10 -- # set +x 00:14:42.707 ************************************ 00:14:42.707 START TEST bdev_hello_world 00:14:42.708 ************************************ 00:14:42.708 19:29:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:42.708 [2024-04-24 19:29:07.833136] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:14:42.708 [2024-04-24 19:29:07.833265] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74534 ] 00:14:42.708 [2024-04-24 19:29:07.991768] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.708 [2024-04-24 19:29:08.238132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.272 [2024-04-24 19:29:08.755851] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:43.272 [2024-04-24 19:29:08.755910] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:43.272 [2024-04-24 19:29:08.755936] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:43.272 [2024-04-24 19:29:08.757933] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:43.272 [2024-04-24 19:29:08.758370] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:43.272 [2024-04-24 19:29:08.758400] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:43.272 [2024-04-24 19:29:08.758599] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
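hello_bdev opens the named bdev, writes the string "Hello World!", reads it back and compares, which is why the notices above pair a write completion with a read of the same string. The wrapped invocation can be run on its own:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1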
00:14:43.272 00:14:43.272 [2024-04-24 19:29:08.758652] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:44.644 00:14:44.644 real 0m2.435s 00:14:44.644 user 0m2.094s 00:14:44.644 sys 0m0.222s 00:14:44.644 19:29:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:44.644 ************************************ 00:14:44.644 END TEST bdev_hello_world 00:14:44.644 ************************************ 00:14:44.644 19:29:10 -- common/autotest_common.sh@10 -- # set +x 00:14:44.644 19:29:10 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:14:44.644 19:29:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:44.644 19:29:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:44.644 19:29:10 -- common/autotest_common.sh@10 -- # set +x 00:14:44.903 ************************************ 00:14:44.903 START TEST bdev_bounds 00:14:44.903 ************************************ 00:14:44.903 19:29:10 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:14:44.903 19:29:10 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:44.903 19:29:10 -- bdev/blockdev.sh@290 -- # bdevio_pid=74586 00:14:44.903 19:29:10 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:44.903 Process bdevio pid: 74586 00:14:44.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.903 19:29:10 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 74586' 00:14:44.903 19:29:10 -- bdev/blockdev.sh@293 -- # waitforlisten 74586 00:14:44.903 19:29:10 -- common/autotest_common.sh@817 -- # '[' -z 74586 ']' 00:14:44.903 19:29:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.903 19:29:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:44.903 19:29:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.903 19:29:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:44.903 19:29:10 -- common/autotest_common.sh@10 -- # set +x 00:14:44.903 [2024-04-24 19:29:10.447770] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
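bdev_bounds is a two-process arrangement: bdevio comes up as an RPC-driven CUnit server (-w appears to hold it until the tests are triggered over RPC; -s 0 appears to request no fixed hugepage reservation), and tests.py fires the suites against it. A sketch of the pair:

  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests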
00:14:44.903 [2024-04-24 19:29:10.448001] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74586 ] 00:14:45.160 [2024-04-24 19:29:10.630508] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:45.418 [2024-04-24 19:29:10.977483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.418 [2024-04-24 19:29:10.977587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.418 [2024-04-24 19:29:10.977614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.983 19:29:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:45.983 19:29:11 -- common/autotest_common.sh@850 -- # return 0 00:14:45.983 19:29:11 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:46.240 I/O targets: 00:14:46.240 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:46.240 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:46.240 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:46.240 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:46.240 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:46.240 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:46.240 00:14:46.240 00:14:46.240 CUnit - A unit testing framework for C - Version 2.1-3 00:14:46.240 http://cunit.sourceforge.net/ 00:14:46.240 00:14:46.240 00:14:46.240 Suite: bdevio tests on: nvme3n1 00:14:46.240 Test: blockdev write read block ...passed 00:14:46.240 Test: blockdev write zeroes read block ...passed 00:14:46.240 Test: blockdev write zeroes read no split ...passed 00:14:46.240 Test: blockdev write zeroes read split ...passed 00:14:46.240 Test: blockdev write zeroes read split partial ...passed 00:14:46.240 Test: blockdev reset ...passed 00:14:46.240 Test: blockdev write read 8 blocks ...passed 00:14:46.240 Test: blockdev write read size > 128k ...passed 00:14:46.240 Test: blockdev write read invalid size ...passed 00:14:46.240 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:46.240 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:46.240 Test: blockdev write read max offset ...passed 00:14:46.240 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:46.240 Test: blockdev writev readv 8 blocks ...passed 00:14:46.240 Test: blockdev writev readv 30 x 1block ...passed 00:14:46.240 Test: blockdev writev readv block ...passed 00:14:46.240 Test: blockdev writev readv size > 128k ...passed 00:14:46.240 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:46.240 Test: blockdev comparev and writev ...passed 00:14:46.240 Test: blockdev nvme passthru rw ...passed 00:14:46.240 Test: blockdev nvme passthru vendor specific ...passed 00:14:46.240 Test: blockdev nvme admin passthru ...passed 00:14:46.240 Test: blockdev copy ...passed 00:14:46.240 Suite: bdevio tests on: nvme2n3 00:14:46.240 Test: blockdev write read block ...passed 00:14:46.240 Test: blockdev write zeroes read block ...passed 00:14:46.240 Test: blockdev write zeroes read no split ...passed 00:14:46.240 Test: blockdev write zeroes read split ...passed 00:14:46.240 Test: blockdev write zeroes read split partial ...passed 00:14:46.240 Test: blockdev reset ...passed 00:14:46.240 Test: blockdev write read 8 blocks ...passed 00:14:46.240 Test: blockdev write read size > 128k 
...passed 00:14:46.240 Test: blockdev write read invalid size ...passed 00:14:46.240 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:46.240 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:46.240 Test: blockdev write read max offset ...passed 00:14:46.240 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:46.240 Test: blockdev writev readv 8 blocks ...passed 00:14:46.240 Test: blockdev writev readv 30 x 1block ...passed 00:14:46.240 Test: blockdev writev readv block ...passed 00:14:46.240 Test: blockdev writev readv size > 128k ...passed 00:14:46.240 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:46.240 Test: blockdev comparev and writev ...passed 00:14:46.240 Test: blockdev nvme passthru rw ...passed 00:14:46.240 Test: blockdev nvme passthru vendor specific ...passed 00:14:46.240 Test: blockdev nvme admin passthru ...passed 00:14:46.240 Test: blockdev copy ...passed 00:14:46.240 Suite: bdevio tests on: nvme2n2 00:14:46.240 Test: blockdev write read block ...passed 00:14:46.240 Test: blockdev write zeroes read block ...passed 00:14:46.240 Test: blockdev write zeroes read no split ...passed 00:14:46.499 Test: blockdev write zeroes read split ...passed 00:14:46.500 Test: blockdev write zeroes read split partial ...passed 00:14:46.500 Test: blockdev reset ...passed 00:14:46.500 Test: blockdev write read 8 blocks ...passed 00:14:46.500 Test: blockdev write read size > 128k ...passed 00:14:46.500 Test: blockdev write read invalid size ...passed 00:14:46.500 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:46.500 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:46.500 Test: blockdev write read max offset ...passed 00:14:46.500 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:46.500 Test: blockdev writev readv 8 blocks ...passed 00:14:46.500 Test: blockdev writev readv 30 x 1block ...passed 00:14:46.500 Test: blockdev writev readv block ...passed 00:14:46.500 Test: blockdev writev readv size > 128k ...passed 00:14:46.500 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:46.500 Test: blockdev comparev and writev ...passed 00:14:46.500 Test: blockdev nvme passthru rw ...passed 00:14:46.500 Test: blockdev nvme passthru vendor specific ...passed 00:14:46.500 Test: blockdev nvme admin passthru ...passed 00:14:46.500 Test: blockdev copy ...passed 00:14:46.500 Suite: bdevio tests on: nvme2n1 00:14:46.500 Test: blockdev write read block ...passed 00:14:46.500 Test: blockdev write zeroes read block ...passed 00:14:46.500 Test: blockdev write zeroes read no split ...passed 00:14:46.500 Test: blockdev write zeroes read split ...passed 00:14:46.500 Test: blockdev write zeroes read split partial ...passed 00:14:46.500 Test: blockdev reset ...passed 00:14:46.500 Test: blockdev write read 8 blocks ...passed 00:14:46.500 Test: blockdev write read size > 128k ...passed 00:14:46.500 Test: blockdev write read invalid size ...passed 00:14:46.500 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:46.500 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:46.500 Test: blockdev write read max offset ...passed 00:14:46.500 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:46.500 Test: blockdev writev readv 8 blocks ...passed 00:14:46.500 Test: blockdev writev readv 30 x 1block ...passed 00:14:46.500 Test: blockdev writev readv 
block ...passed 00:14:46.500 Test: blockdev writev readv size > 128k ...passed 00:14:46.500 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:46.500 Test: blockdev comparev and writev ...passed 00:14:46.500 Test: blockdev nvme passthru rw ...passed 00:14:46.500 Test: blockdev nvme passthru vendor specific ...passed 00:14:46.500 Test: blockdev nvme admin passthru ...passed 00:14:46.500 Test: blockdev copy ...passed 00:14:46.500 Suite: bdevio tests on: nvme1n1 00:14:46.500 Test: blockdev write read block ...passed 00:14:46.500 Test: blockdev write zeroes read block ...passed 00:14:46.500 Test: blockdev write zeroes read no split ...passed 00:14:46.500 Test: blockdev write zeroes read split ...passed 00:14:46.758 Test: blockdev write zeroes read split partial ...passed 00:14:46.758 Test: blockdev reset ...passed 00:14:46.758 Test: blockdev write read 8 blocks ...passed 00:14:46.758 Test: blockdev write read size > 128k ...passed 00:14:46.758 Test: blockdev write read invalid size ...passed 00:14:46.758 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:46.758 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:46.758 Test: blockdev write read max offset ...passed 00:14:46.758 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:46.758 Test: blockdev writev readv 8 blocks ...passed 00:14:46.758 Test: blockdev writev readv 30 x 1block ...passed 00:14:46.758 Test: blockdev writev readv block ...passed 00:14:46.758 Test: blockdev writev readv size > 128k ...passed 00:14:46.758 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:46.758 Test: blockdev comparev and writev ...passed 00:14:46.758 Test: blockdev nvme passthru rw ...passed 00:14:46.758 Test: blockdev nvme passthru vendor specific ...passed 00:14:46.758 Test: blockdev nvme admin passthru ...passed 00:14:46.758 Test: blockdev copy ...passed 00:14:46.758 Suite: bdevio tests on: nvme0n1 00:14:46.758 Test: blockdev write read block ...passed 00:14:46.758 Test: blockdev write zeroes read block ...passed 00:14:46.758 Test: blockdev write zeroes read no split ...passed 00:14:46.758 Test: blockdev write zeroes read split ...passed 00:14:46.759 Test: blockdev write zeroes read split partial ...passed 00:14:46.759 Test: blockdev reset ...passed 00:14:46.759 Test: blockdev write read 8 blocks ...passed 00:14:46.759 Test: blockdev write read size > 128k ...passed 00:14:46.759 Test: blockdev write read invalid size ...passed 00:14:46.759 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:46.759 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:46.759 Test: blockdev write read max offset ...passed 00:14:46.759 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:46.759 Test: blockdev writev readv 8 blocks ...passed 00:14:46.759 Test: blockdev writev readv 30 x 1block ...passed 00:14:46.759 Test: blockdev writev readv block ...passed 00:14:46.759 Test: blockdev writev readv size > 128k ...passed 00:14:46.759 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:46.759 Test: blockdev comparev and writev ...passed 00:14:46.759 Test: blockdev nvme passthru rw ...passed 00:14:46.759 Test: blockdev nvme passthru vendor specific ...passed 00:14:46.759 Test: blockdev nvme admin passthru ...passed 00:14:46.759 Test: blockdev copy ...passed 00:14:46.759 00:14:46.759 Run Summary: Type Total Ran Passed Failed Inactive 00:14:46.759 suites 6 6 n/a 0 0 
00:14:46.759 tests 138 138 138 0 0 00:14:46.759 asserts 780 780 780 0 n/a 00:14:46.759 00:14:46.759 Elapsed time = 1.787 seconds 00:14:46.759 0 00:14:46.759 19:29:12 -- bdev/blockdev.sh@295 -- # killprocess 74586 00:14:46.759 19:29:12 -- common/autotest_common.sh@936 -- # '[' -z 74586 ']' 00:14:46.759 19:29:12 -- common/autotest_common.sh@940 -- # kill -0 74586 00:14:46.759 19:29:12 -- common/autotest_common.sh@941 -- # uname 00:14:46.759 19:29:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:46.759 19:29:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74586 00:14:46.759 killing process with pid 74586 00:14:46.759 19:29:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:46.759 19:29:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:46.759 19:29:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74586' 00:14:46.759 19:29:12 -- common/autotest_common.sh@955 -- # kill 74586 00:14:46.759 19:29:12 -- common/autotest_common.sh@960 -- # wait 74586 00:14:48.660 ************************************ 00:14:48.660 END TEST bdev_bounds 00:14:48.660 ************************************ 00:14:48.660 19:29:13 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:14:48.660 00:14:48.660 real 0m3.546s 00:14:48.660 user 0m8.116s 00:14:48.660 sys 0m0.466s 00:14:48.660 19:29:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:48.660 19:29:13 -- common/autotest_common.sh@10 -- # set +x 00:14:48.660 19:29:13 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:48.660 19:29:13 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:14:48.660 19:29:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:48.660 19:29:13 -- common/autotest_common.sh@10 -- # set +x 00:14:48.660 ************************************ 00:14:48.660 START TEST bdev_nbd 00:14:48.660 ************************************ 00:14:48.660 19:29:13 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:48.660 19:29:13 -- bdev/blockdev.sh@300 -- # uname -s 00:14:48.660 19:29:14 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:14:48.660 19:29:14 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:48.660 19:29:14 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:48.660 19:29:14 -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:48.660 19:29:14 -- bdev/blockdev.sh@304 -- # local bdev_all 00:14:48.660 19:29:14 -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:14:48.660 19:29:14 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:14:48.660 19:29:14 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:48.660 19:29:14 -- bdev/blockdev.sh@311 -- # local nbd_all 00:14:48.660 19:29:14 -- bdev/blockdev.sh@312 -- # bdev_num=6 00:14:48.660 19:29:14 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:48.660 19:29:14 -- bdev/blockdev.sh@314 -- # local nbd_list 00:14:48.660 19:29:14 -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 
'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:48.660 19:29:14 -- bdev/blockdev.sh@315 -- # local bdev_list 00:14:48.660 19:29:14 -- bdev/blockdev.sh@318 -- # nbd_pid=74667 00:14:48.660 19:29:14 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:48.660 19:29:14 -- bdev/blockdev.sh@320 -- # waitforlisten 74667 /var/tmp/spdk-nbd.sock 00:14:48.660 19:29:14 -- common/autotest_common.sh@817 -- # '[' -z 74667 ']' 00:14:48.660 19:29:14 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:48.660 19:29:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:48.660 19:29:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:48.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:48.660 19:29:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:48.660 19:29:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:48.660 19:29:14 -- common/autotest_common.sh@10 -- # set +x 00:14:48.660 [2024-04-24 19:29:14.120362] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:14:48.660 [2024-04-24 19:29:14.120531] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.660 [2024-04-24 19:29:14.277891] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.226 [2024-04-24 19:29:14.654723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.793 19:29:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:49.793 19:29:15 -- common/autotest_common.sh@850 -- # return 0 00:14:49.793 19:29:15 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:49.793 19:29:15 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:49.793 19:29:15 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:49.793 19:29:15 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:49.793 19:29:15 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:49.793 19:29:15 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:49.793 19:29:15 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:49.793 19:29:15 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:49.793 19:29:15 -- bdev/nbd_common.sh@24 -- # local i 00:14:49.793 19:29:15 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:49.793 19:29:15 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:49.793 19:29:15 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:49.793 19:29:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:50.052 19:29:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:50.052 19:29:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:50.052 19:29:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:50.052 19:29:15 -- common/autotest_common.sh@854 -- # local 
nbd_name=nbd0 00:14:50.052 19:29:15 -- common/autotest_common.sh@855 -- # local i 00:14:50.052 19:29:15 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:50.052 19:29:15 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:50.052 19:29:15 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:14:50.052 19:29:15 -- common/autotest_common.sh@859 -- # break 00:14:50.052 19:29:15 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:50.052 19:29:15 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:50.052 19:29:15 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.052 1+0 records in 00:14:50.052 1+0 records out 00:14:50.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787829 s, 5.2 MB/s 00:14:50.052 19:29:15 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.052 19:29:15 -- common/autotest_common.sh@872 -- # size=4096 00:14:50.052 19:29:15 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.052 19:29:15 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:50.052 19:29:15 -- common/autotest_common.sh@875 -- # return 0 00:14:50.052 19:29:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:50.052 19:29:15 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:50.052 19:29:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:50.311 19:29:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:50.311 19:29:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:50.311 19:29:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:50.311 19:29:15 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:14:50.311 19:29:15 -- common/autotest_common.sh@855 -- # local i 00:14:50.311 19:29:15 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:50.311 19:29:15 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:50.311 19:29:15 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:14:50.311 19:29:15 -- common/autotest_common.sh@859 -- # break 00:14:50.311 19:29:15 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:50.311 19:29:15 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:50.311 19:29:15 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.311 1+0 records in 00:14:50.311 1+0 records out 00:14:50.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000913388 s, 4.5 MB/s 00:14:50.311 19:29:15 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.311 19:29:15 -- common/autotest_common.sh@872 -- # size=4096 00:14:50.311 19:29:15 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.311 19:29:15 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:50.311 19:29:15 -- common/autotest_common.sh@875 -- # return 0 00:14:50.311 19:29:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:50.311 19:29:15 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:50.311 19:29:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:50.571 19:29:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:50.571 19:29:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:50.571 19:29:16 -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:14:50.571 19:29:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:14:50.571 19:29:16 -- common/autotest_common.sh@855 -- # local i 00:14:50.571 19:29:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:50.571 19:29:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:50.571 19:29:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:14:50.571 19:29:16 -- common/autotest_common.sh@859 -- # break 00:14:50.571 19:29:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:50.571 19:29:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:50.571 19:29:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:50.571 1+0 records in 00:14:50.571 1+0 records out 00:14:50.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000899514 s, 4.6 MB/s 00:14:50.571 19:29:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.571 19:29:16 -- common/autotest_common.sh@872 -- # size=4096 00:14:50.571 19:29:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:50.571 19:29:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:50.571 19:29:16 -- common/autotest_common.sh@875 -- # return 0 00:14:50.571 19:29:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:50.571 19:29:16 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:50.571 19:29:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:51.139 19:29:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:51.139 19:29:16 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:51.139 19:29:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:51.139 19:29:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:14:51.139 19:29:16 -- common/autotest_common.sh@855 -- # local i 00:14:51.139 19:29:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:51.139 19:29:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:51.139 19:29:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:14:51.139 19:29:16 -- common/autotest_common.sh@859 -- # break 00:14:51.139 19:29:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:51.139 19:29:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:51.139 19:29:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.139 1+0 records in 00:14:51.139 1+0 records out 00:14:51.139 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532024 s, 7.7 MB/s 00:14:51.139 19:29:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.139 19:29:16 -- common/autotest_common.sh@872 -- # size=4096 00:14:51.139 19:29:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.139 19:29:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:51.139 19:29:16 -- common/autotest_common.sh@875 -- # return 0 00:14:51.139 19:29:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:51.139 19:29:16 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:51.139 19:29:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:51.398 19:29:16 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:51.398 19:29:16 -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:51.398 19:29:16 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:51.398 19:29:16 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:14:51.398 19:29:16 -- common/autotest_common.sh@855 -- # local i 00:14:51.398 19:29:16 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:51.398 19:29:16 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:51.398 19:29:16 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:14:51.398 19:29:16 -- common/autotest_common.sh@859 -- # break 00:14:51.398 19:29:16 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:51.398 19:29:16 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:51.398 19:29:16 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.398 1+0 records in 00:14:51.398 1+0 records out 00:14:51.398 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619456 s, 6.6 MB/s 00:14:51.398 19:29:16 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.398 19:29:16 -- common/autotest_common.sh@872 -- # size=4096 00:14:51.398 19:29:16 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.398 19:29:16 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:51.398 19:29:16 -- common/autotest_common.sh@875 -- # return 0 00:14:51.398 19:29:16 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:51.398 19:29:16 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:51.398 19:29:16 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:51.656 19:29:17 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:51.656 19:29:17 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:51.656 19:29:17 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:51.656 19:29:17 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:14:51.656 19:29:17 -- common/autotest_common.sh@855 -- # local i 00:14:51.656 19:29:17 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:51.656 19:29:17 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:51.656 19:29:17 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:14:51.656 19:29:17 -- common/autotest_common.sh@859 -- # break 00:14:51.656 19:29:17 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:51.656 19:29:17 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:51.656 19:29:17 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.656 1+0 records in 00:14:51.656 1+0 records out 00:14:51.656 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000946309 s, 4.3 MB/s 00:14:51.656 19:29:17 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.656 19:29:17 -- common/autotest_common.sh@872 -- # size=4096 00:14:51.656 19:29:17 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.656 19:29:17 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:51.656 19:29:17 -- common/autotest_common.sh@875 -- # return 0 00:14:51.657 19:29:17 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:51.657 19:29:17 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:51.657 19:29:17 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:51.915 19:29:17 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:51.915 { 00:14:51.915 "nbd_device": "/dev/nbd0", 00:14:51.915 "bdev_name": "nvme0n1" 00:14:51.915 }, 00:14:51.915 { 00:14:51.915 "nbd_device": "/dev/nbd1", 00:14:51.915 "bdev_name": "nvme1n1" 00:14:51.915 }, 00:14:51.915 { 00:14:51.915 "nbd_device": "/dev/nbd2", 00:14:51.915 "bdev_name": "nvme2n1" 00:14:51.915 }, 00:14:51.915 { 00:14:51.915 "nbd_device": "/dev/nbd3", 00:14:51.915 "bdev_name": "nvme2n2" 00:14:51.915 }, 00:14:51.915 { 00:14:51.915 "nbd_device": "/dev/nbd4", 00:14:51.915 "bdev_name": "nvme2n3" 00:14:51.915 }, 00:14:51.915 { 00:14:51.915 "nbd_device": "/dev/nbd5", 00:14:51.915 "bdev_name": "nvme3n1" 00:14:51.915 } 00:14:51.915 ]' 00:14:51.915 19:29:17 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:51.915 19:29:17 -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:51.915 { 00:14:51.915 "nbd_device": "/dev/nbd0", 00:14:51.915 "bdev_name": "nvme0n1" 00:14:51.915 }, 00:14:51.915 { 00:14:51.915 "nbd_device": "/dev/nbd1", 00:14:51.915 "bdev_name": "nvme1n1" 00:14:51.915 }, 00:14:51.915 { 00:14:51.915 "nbd_device": "/dev/nbd2", 00:14:51.915 "bdev_name": "nvme2n1" 00:14:51.915 }, 00:14:51.915 { 00:14:51.915 "nbd_device": "/dev/nbd3", 00:14:51.915 "bdev_name": "nvme2n2" 00:14:51.915 }, 00:14:51.915 { 00:14:51.915 "nbd_device": "/dev/nbd4", 00:14:51.915 "bdev_name": "nvme2n3" 00:14:51.915 }, 00:14:51.915 { 00:14:51.915 "nbd_device": "/dev/nbd5", 00:14:51.915 "bdev_name": "nvme3n1" 00:14:51.915 } 00:14:51.915 ]' 00:14:51.915 19:29:17 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:51.915 19:29:17 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:51.915 19:29:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:51.915 19:29:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:51.915 19:29:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:51.915 19:29:17 -- bdev/nbd_common.sh@51 -- # local i 00:14:51.915 19:29:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.915 19:29:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:52.252 19:29:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:52.252 19:29:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:52.252 19:29:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:52.252 19:29:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:52.252 19:29:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:52.252 19:29:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:52.252 19:29:17 -- bdev/nbd_common.sh@41 -- # break 00:14:52.252 19:29:17 -- bdev/nbd_common.sh@45 -- # return 0 00:14:52.252 19:29:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.252 19:29:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:52.510 19:29:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:52.510 19:29:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:52.510 19:29:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:52.510 19:29:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:52.510 19:29:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:52.510 19:29:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:14:52.510 19:29:18 -- bdev/nbd_common.sh@41 -- # break 00:14:52.511 19:29:18 -- bdev/nbd_common.sh@45 -- # return 0 00:14:52.511 19:29:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.511 19:29:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:52.770 19:29:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:52.770 19:29:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:52.770 19:29:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:52.770 19:29:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:52.770 19:29:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:52.770 19:29:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:14:52.770 19:29:18 -- bdev/nbd_common.sh@41 -- # break 00:14:52.770 19:29:18 -- bdev/nbd_common.sh@45 -- # return 0 00:14:52.770 19:29:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.770 19:29:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:53.029 19:29:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:53.029 19:29:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:53.029 19:29:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:53.029 19:29:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.029 19:29:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.029 19:29:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:53.029 19:29:18 -- bdev/nbd_common.sh@41 -- # break 00:14:53.029 19:29:18 -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.029 19:29:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.029 19:29:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:53.290 19:29:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:53.290 19:29:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:53.290 19:29:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:53.290 19:29:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.290 19:29:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.290 19:29:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:53.290 19:29:18 -- bdev/nbd_common.sh@41 -- # break 00:14:53.290 19:29:18 -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.290 19:29:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.290 19:29:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:53.549 19:29:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:53.549 19:29:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:53.549 19:29:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:53.549 19:29:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.549 19:29:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.549 19:29:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:53.549 19:29:19 -- bdev/nbd_common.sh@41 -- # break 00:14:53.549 19:29:19 -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.549 19:29:19 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:53.549 19:29:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:53.549 19:29:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
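
The waitfornbd/waitfornbd_exit traces repeated above all follow one pattern: poll /proc/partitions a bounded number of times until the kernel registers (or drops) the device node, then prove an attached device is usable with a single direct-I/O read. A minimal reconstruction of the attach-side helper, in the same bash the suite is written in (the real helper lives in common/autotest_common.sh; /tmp/nbdtest stands in for the suite's nbdtest scratch file, and the sleep between polls is an assumption, since every probe in this run succeeds on the first iteration and the trace never reaches it):

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # the node exists once the kernel lists it in /proc/partitions
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1    # assumed back-off; not visible in this trace
      done
      # prove the device is readable: one 4 KiB block, direct I/O
      dd "if=/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
      local size
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }

waitfornbd_exit, used during the teardown traced above, is the inverse: it polls until grep -q -w no longer finds the name in /proc/partitions and skips the read entirely.
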
00:14:53.808 19:29:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@65 -- # true 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@65 -- # count=0 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@122 -- # count=0 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@127 -- # return 0 00:14:53.808 19:29:19 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@12 -- # local i 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:53.808 19:29:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:54.066 /dev/nbd0 00:14:54.066 19:29:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:54.066 19:29:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:54.066 19:29:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:14:54.066 19:29:19 -- common/autotest_common.sh@855 -- # local i 00:14:54.066 19:29:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:54.066 19:29:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:54.066 19:29:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:14:54.066 19:29:19 -- common/autotest_common.sh@859 -- # break 00:14:54.066 19:29:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:54.066 19:29:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:54.066 19:29:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.325 1+0 records in 00:14:54.325 1+0 records out 00:14:54.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551869 s, 
7.4 MB/s 00:14:54.325 19:29:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.325 19:29:19 -- common/autotest_common.sh@872 -- # size=4096 00:14:54.325 19:29:19 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.325 19:29:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:54.325 19:29:19 -- common/autotest_common.sh@875 -- # return 0 00:14:54.325 19:29:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.325 19:29:19 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:54.325 19:29:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:54.325 /dev/nbd1 00:14:54.583 19:29:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:54.583 19:29:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:54.583 19:29:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:14:54.583 19:29:20 -- common/autotest_common.sh@855 -- # local i 00:14:54.583 19:29:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:54.583 19:29:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:54.583 19:29:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:14:54.583 19:29:20 -- common/autotest_common.sh@859 -- # break 00:14:54.583 19:29:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:54.583 19:29:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:54.584 19:29:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.584 1+0 records in 00:14:54.584 1+0 records out 00:14:54.584 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00083543 s, 4.9 MB/s 00:14:54.584 19:29:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.584 19:29:20 -- common/autotest_common.sh@872 -- # size=4096 00:14:54.584 19:29:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.584 19:29:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:54.584 19:29:20 -- common/autotest_common.sh@875 -- # return 0 00:14:54.584 19:29:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.584 19:29:20 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:54.584 19:29:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:54.842 /dev/nbd10 00:14:54.842 19:29:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:54.842 19:29:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:54.842 19:29:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:14:54.842 19:29:20 -- common/autotest_common.sh@855 -- # local i 00:14:54.842 19:29:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:54.842 19:29:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:54.842 19:29:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:14:54.842 19:29:20 -- common/autotest_common.sh@859 -- # break 00:14:54.842 19:29:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:54.842 19:29:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:54.842 19:29:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:54.842 1+0 records in 00:14:54.842 1+0 records out 00:14:54.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000598281 s, 6.8 MB/s 00:14:54.842 19:29:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.842 19:29:20 -- common/autotest_common.sh@872 -- # size=4096 00:14:54.842 19:29:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:54.842 19:29:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:54.842 19:29:20 -- common/autotest_common.sh@875 -- # return 0 00:14:54.842 19:29:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:54.842 19:29:20 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:54.842 19:29:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:55.101 /dev/nbd11 00:14:55.101 19:29:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:55.101 19:29:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:55.101 19:29:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:14:55.101 19:29:20 -- common/autotest_common.sh@855 -- # local i 00:14:55.101 19:29:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:55.101 19:29:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:55.101 19:29:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:14:55.101 19:29:20 -- common/autotest_common.sh@859 -- # break 00:14:55.101 19:29:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:55.101 19:29:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:55.101 19:29:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.101 1+0 records in 00:14:55.101 1+0 records out 00:14:55.101 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000640121 s, 6.4 MB/s 00:14:55.101 19:29:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.101 19:29:20 -- common/autotest_common.sh@872 -- # size=4096 00:14:55.101 19:29:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.101 19:29:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:55.101 19:29:20 -- common/autotest_common.sh@875 -- # return 0 00:14:55.101 19:29:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.101 19:29:20 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:55.101 19:29:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:55.359 /dev/nbd12 00:14:55.359 19:29:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:55.359 19:29:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:55.359 19:29:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:14:55.359 19:29:20 -- common/autotest_common.sh@855 -- # local i 00:14:55.359 19:29:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:55.359 19:29:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:55.359 19:29:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:14:55.359 19:29:20 -- common/autotest_common.sh@859 -- # break 00:14:55.359 19:29:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:55.359 19:29:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:55.359 19:29:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.359 1+0 records in 00:14:55.359 1+0 records out 00:14:55.359 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000731737 s, 5.6 MB/s 00:14:55.359 19:29:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.359 19:29:20 -- common/autotest_common.sh@872 -- # size=4096 00:14:55.359 19:29:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.359 19:29:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:55.359 19:29:20 -- common/autotest_common.sh@875 -- # return 0 00:14:55.359 19:29:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.359 19:29:20 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:55.359 19:29:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:55.624 /dev/nbd13 00:14:55.624 19:29:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:55.624 19:29:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:55.624 19:29:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:14:55.624 19:29:21 -- common/autotest_common.sh@855 -- # local i 00:14:55.624 19:29:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:14:55.624 19:29:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:14:55.624 19:29:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:14:55.624 19:29:21 -- common/autotest_common.sh@859 -- # break 00:14:55.624 19:29:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:14:55.624 19:29:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:14:55.624 19:29:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.624 1+0 records in 00:14:55.624 1+0 records out 00:14:55.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000866047 s, 4.7 MB/s 00:14:55.624 19:29:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.624 19:29:21 -- common/autotest_common.sh@872 -- # size=4096 00:14:55.624 19:29:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:55.624 19:29:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:14:55.624 19:29:21 -- common/autotest_common.sh@875 -- # return 0 00:14:55.624 19:29:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.624 19:29:21 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:55.624 19:29:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:55.624 19:29:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:55.624 19:29:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:55.884 { 00:14:55.884 "nbd_device": "/dev/nbd0", 00:14:55.884 "bdev_name": "nvme0n1" 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "nbd_device": "/dev/nbd1", 00:14:55.884 "bdev_name": "nvme1n1" 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "nbd_device": "/dev/nbd10", 00:14:55.884 "bdev_name": "nvme2n1" 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "nbd_device": "/dev/nbd11", 00:14:55.884 "bdev_name": "nvme2n2" 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "nbd_device": "/dev/nbd12", 00:14:55.884 "bdev_name": "nvme2n3" 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "nbd_device": "/dev/nbd13", 00:14:55.884 "bdev_name": "nvme3n1" 00:14:55.884 } 00:14:55.884 ]' 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:55.884 { 00:14:55.884 "nbd_device": 
"/dev/nbd0", 00:14:55.884 "bdev_name": "nvme0n1" 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "nbd_device": "/dev/nbd1", 00:14:55.884 "bdev_name": "nvme1n1" 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "nbd_device": "/dev/nbd10", 00:14:55.884 "bdev_name": "nvme2n1" 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "nbd_device": "/dev/nbd11", 00:14:55.884 "bdev_name": "nvme2n2" 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "nbd_device": "/dev/nbd12", 00:14:55.884 "bdev_name": "nvme2n3" 00:14:55.884 }, 00:14:55.884 { 00:14:55.884 "nbd_device": "/dev/nbd13", 00:14:55.884 "bdev_name": "nvme3n1" 00:14:55.884 } 00:14:55.884 ]' 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:55.884 /dev/nbd1 00:14:55.884 /dev/nbd10 00:14:55.884 /dev/nbd11 00:14:55.884 /dev/nbd12 00:14:55.884 /dev/nbd13' 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:55.884 /dev/nbd1 00:14:55.884 /dev/nbd10 00:14:55.884 /dev/nbd11 00:14:55.884 /dev/nbd12 00:14:55.884 /dev/nbd13' 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@65 -- # count=6 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@66 -- # echo 6 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@95 -- # count=6 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:55.884 256+0 records in 00:14:55.884 256+0 records out 00:14:55.884 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00637754 s, 164 MB/s 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:55.884 19:29:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:56.144 256+0 records in 00:14:56.144 256+0 records out 00:14:56.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.097016 s, 10.8 MB/s 00:14:56.144 19:29:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:56.144 19:29:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:56.144 256+0 records in 00:14:56.144 256+0 records out 00:14:56.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0916899 s, 11.4 MB/s 00:14:56.144 19:29:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:56.144 19:29:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:56.144 256+0 records in 00:14:56.144 256+0 records out 00:14:56.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0967098 s, 10.8 MB/s 00:14:56.144 19:29:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:56.144 19:29:21 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:56.403 256+0 records in 00:14:56.403 256+0 records out 00:14:56.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0974308 s, 10.8 MB/s 00:14:56.403 19:29:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:56.403 19:29:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:56.403 256+0 records in 00:14:56.403 256+0 records out 00:14:56.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0977319 s, 10.7 MB/s 00:14:56.403 19:29:21 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:56.403 19:29:21 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:56.661 256+0 records in 00:14:56.661 256+0 records out 00:14:56.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0939673 s, 11.2 MB/s 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@51 -- # local i 00:14:56.661 19:29:22 -- 
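
The write/verify pass that produced the dd and cmp lines above fans one shared random buffer out to every exported device and then byte-compares each device against it. Condensed to its core, with the scratch path shortened to a placeholder:

  randfile=/tmp/nbdrandtest    # stands in for test/bdev/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

  # one shared 1 MiB random pattern (256 x 4 KiB)
  dd if=/dev/urandom of="$randfile" bs=4096 count=256

  # write phase: push the pattern to every device with direct I/O
  for dev in "${nbd_list[@]}"; do
      dd if="$randfile" of="$dev" bs=4096 count=256 oflag=direct
  done

  # verify phase: read each device back, comparing the first 1M byte-for-byte
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$randfile" "$dev"
  done
  rm "$randfile"

oflag=direct keeps the page cache out of the write path, so the data has actually crossed the nbd/bdev boundary before the compare runs.
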
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.661 19:29:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:56.920 19:29:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:56.920 19:29:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:56.920 19:29:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:56.920 19:29:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.920 19:29:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.920 19:29:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:56.920 19:29:22 -- bdev/nbd_common.sh@41 -- # break 00:14:56.920 19:29:22 -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.920 19:29:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.920 19:29:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:57.179 19:29:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:57.179 19:29:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:57.179 19:29:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:57.179 19:29:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.179 19:29:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.179 19:29:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:57.179 19:29:22 -- bdev/nbd_common.sh@41 -- # break 00:14:57.179 19:29:22 -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.179 19:29:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.179 19:29:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:57.437 19:29:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:57.437 19:29:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:57.437 19:29:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:57.437 19:29:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.437 19:29:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.437 19:29:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:57.437 19:29:22 -- bdev/nbd_common.sh@41 -- # break 00:14:57.437 19:29:22 -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.437 19:29:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.437 19:29:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@41 -- # break 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:57.700 19:29:23 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@41 -- # break 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@45 -- # return 0 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:57.700 19:29:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@41 -- # break 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@65 -- # true 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@65 -- # count=0 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@104 -- # count=0 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@109 -- # return 0 00:14:58.273 19:29:23 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:58.273 19:29:23 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:58.532 malloc_lvol_verify 00:14:58.532 19:29:24 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:58.790 d583e2b0-6f04-4178-b4f6-2bd653db0d9c 00:14:58.790 19:29:24 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:59.049 623ee102-09f9-4ea2-87cd-33fecc02d168 00:14:59.049 19:29:24 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:59.308 /dev/nbd0 00:14:59.308 19:29:24 -- 
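
The mkfs.ext4 that follows is the tail end of nbd_with_lvol_verify: the calls traced above create a malloc bdev, layer a logical-volume store and a small lvol on it, and export the lvol over /dev/nbd0, which is then formatted to prove the stack behaves like an ordinary block device. The RPC sequence, pulled together from the trace (reading the 16/512 and 4 arguments as malloc size in MiB, block size in bytes, and lvol size in MiB is an assumption based on the usual rpc.py conventions, though the 4096 1k-block filesystem created below is consistent with it):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc bdev_malloc_create -b malloc_lvol_verify 16 512
  $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs
  $rpc bdev_lvol_create lvol 4 -l lvs
  $rpc nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0    # the step traced next
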
bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:59.308 mke2fs 1.46.5 (30-Dec-2021) 00:14:59.308 Discarding device blocks: 0/4096 done 00:14:59.308 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:59.308 00:14:59.308 Allocating group tables: 0/1 done 00:14:59.308 Writing inode tables: 0/1 done 00:14:59.308 Creating journal (1024 blocks): done 00:14:59.308 Writing superblocks and filesystem accounting information: 0/1 done 00:14:59.308 00:14:59.308 19:29:24 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:59.308 19:29:24 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:59.308 19:29:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:59.308 19:29:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:59.308 19:29:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.308 19:29:24 -- bdev/nbd_common.sh@51 -- # local i 00:14:59.308 19:29:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.308 19:29:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:59.568 19:29:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:59.568 19:29:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:59.568 19:29:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:59.568 19:29:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.568 19:29:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.568 19:29:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:59.568 19:29:25 -- bdev/nbd_common.sh@41 -- # break 00:14:59.568 19:29:25 -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.568 19:29:25 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:59.568 19:29:25 -- bdev/nbd_common.sh@147 -- # return 0 00:14:59.568 19:29:25 -- bdev/blockdev.sh@326 -- # killprocess 74667 00:14:59.568 19:29:25 -- common/autotest_common.sh@936 -- # '[' -z 74667 ']' 00:14:59.568 19:29:25 -- common/autotest_common.sh@940 -- # kill -0 74667 00:14:59.568 19:29:25 -- common/autotest_common.sh@941 -- # uname 00:14:59.568 19:29:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.568 19:29:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74667 00:14:59.568 killing process with pid 74667 00:14:59.568 19:29:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:59.568 19:29:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:59.568 19:29:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74667' 00:14:59.568 19:29:25 -- common/autotest_common.sh@955 -- # kill 74667 00:14:59.568 19:29:25 -- common/autotest_common.sh@960 -- # wait 74667 00:15:01.517 19:29:26 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:15:01.517 00:15:01.517 real 0m12.881s 00:15:01.517 user 0m17.617s 00:15:01.517 sys 0m4.308s 00:15:01.517 ************************************ 00:15:01.517 END TEST bdev_nbd 00:15:01.517 ************************************ 00:15:01.517 19:29:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:01.517 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:15:01.517 19:29:26 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:15:01.517 19:29:26 -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:15:01.517 19:29:26 -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:15:01.517 19:29:26 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:15:01.517 19:29:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 
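
Before the fio suite starts, the nbd test tears down its SPDK target with the killprocess helper traced a few lines up. Reconstructed from that trace, it is deliberately defensive; only the path actually exercised here (comm resolves to reactor_0) is certain, and the sudo branch is a stated assumption:

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1    # bail out if the pid is already gone
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          # a comm of "sudo" would need different handling; that branch is
          # not taken in this run, so aborting here is an assumed stand-in
          [ "$process_name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"    # reap it so its sockets are really free
  }
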
00:15:01.517 19:29:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:01.517 19:29:26 -- common/autotest_common.sh@10 -- # set +x 00:15:01.517 ************************************ 00:15:01.517 START TEST bdev_fio 00:15:01.517 ************************************ 00:15:01.517 19:29:27 -- common/autotest_common.sh@1111 -- # fio_test_suite '' 00:15:01.517 19:29:27 -- bdev/blockdev.sh@331 -- # local env_context 00:15:01.517 19:29:27 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:15:01.517 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:15:01.517 19:29:27 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:15:01.517 19:29:27 -- bdev/blockdev.sh@339 -- # echo '' 00:15:01.517 19:29:27 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:15:01.517 19:29:27 -- bdev/blockdev.sh@339 -- # env_context= 00:15:01.517 19:29:27 -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:15:01.517 19:29:27 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:01.517 19:29:27 -- common/autotest_common.sh@1267 -- # local workload=verify 00:15:01.517 19:29:27 -- common/autotest_common.sh@1268 -- # local bdev_type=AIO 00:15:01.517 19:29:27 -- common/autotest_common.sh@1269 -- # local env_context= 00:15:01.517 19:29:27 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:15:01.517 19:29:27 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:01.517 19:29:27 -- common/autotest_common.sh@1277 -- # '[' -z verify ']' 00:15:01.517 19:29:27 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:15:01.517 19:29:27 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:01.517 19:29:27 -- common/autotest_common.sh@1287 -- # cat 00:15:01.517 19:29:27 -- common/autotest_common.sh@1299 -- # '[' verify == verify ']' 00:15:01.517 19:29:27 -- common/autotest_common.sh@1300 -- # cat 00:15:01.517 19:29:27 -- common/autotest_common.sh@1309 -- # '[' AIO == AIO ']' 00:15:01.517 19:29:27 -- common/autotest_common.sh@1310 -- # /usr/src/fio/fio --version 00:15:01.517 19:29:27 -- common/autotest_common.sh@1310 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:15:01.517 19:29:27 -- common/autotest_common.sh@1311 -- # echo serialize_overlap=1 00:15:01.517 19:29:27 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:01.517 19:29:27 -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:15:01.517 19:29:27 -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:15:01.517 19:29:27 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:01.517 19:29:27 -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:15:01.517 19:29:27 -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:15:01.517 19:29:27 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:01.517 19:29:27 -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:15:01.517 19:29:27 -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:15:01.517 19:29:27 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:01.517 19:29:27 -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:15:01.517 19:29:27 -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:15:01.517 19:29:27 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:01.517 19:29:27 -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:15:01.517 19:29:27 -- bdev/blockdev.sh@343 -- # echo 
filename=nvme2n3 00:15:01.517 19:29:27 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:15:01.517 19:29:27 -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:15:01.517 19:29:27 -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:15:01.517 19:29:27 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:15:01.517 19:29:27 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:01.517 19:29:27 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:15:01.517 19:29:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:01.517 19:29:27 -- common/autotest_common.sh@10 -- # set +x 00:15:01.798 ************************************ 00:15:01.798 START TEST bdev_fio_rw_verify 00:15:01.798 ************************************ 00:15:01.798 19:29:27 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:01.798 19:29:27 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:01.798 19:29:27 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:15:01.798 19:29:27 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.798 19:29:27 -- common/autotest_common.sh@1325 -- # local sanitizers 00:15:01.798 19:29:27 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.798 19:29:27 -- common/autotest_common.sh@1327 -- # shift 00:15:01.798 19:29:27 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:15:01.798 19:29:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.798 19:29:27 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.798 19:29:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:01.798 19:29:27 -- common/autotest_common.sh@1331 -- # grep libasan 00:15:01.798 19:29:27 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:01.798 19:29:27 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:01.798 19:29:27 -- common/autotest_common.sh@1333 -- # break 00:15:01.798 19:29:27 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:01.798 19:29:27 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:15:01.798 
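
The LD_PRELOAD dance traced just above exists because fio loads the SPDK engine with dlopen(), and an ASan-instrumented plugin cannot load unless the sanitizer runtime is already mapped into the process. The wrapper therefore asks ldd which sanitizer library the plugin links against and force-loads it ahead of the engine. Its core, as a sketch with the fio arguments elided:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  sanitizers=(libasan libclang_rt.asan)

  asan_lib=
  for sanitizer in "${sanitizers[@]}"; do
      # the third ldd column is the resolved on-disk path of the library
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [ -n "$asan_lib" ] && break
  done

  # preload runtime + plugin so fio's dlopen() finds both already mapped
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev "$@"

The job banner that follows, with every job reporting ioengine=spdk_bdev, confirms the engine loaded under the preloaded /usr/lib64/libasan.so.8.
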
job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:01.798 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:01.798 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:01.798 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:01.798 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:01.798 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:15:01.798 fio-3.35 00:15:01.798 Starting 6 threads 00:15:14.003 00:15:14.003 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75112: Wed Apr 24 19:29:38 2024 00:15:14.003 read: IOPS=29.4k, BW=115MiB/s (121MB/s)(1150MiB/10001msec) 00:15:14.003 slat (usec): min=2, max=5320, avg= 9.04, stdev=13.84 00:15:14.003 clat (usec): min=98, max=9665, avg=505.53, stdev=308.63 00:15:14.003 lat (usec): min=106, max=9684, avg=514.57, stdev=310.03 00:15:14.003 clat percentiles (usec): 00:15:14.003 | 50.000th=[ 449], 99.000th=[ 1369], 99.900th=[ 4015], 99.990th=[ 5735], 00:15:14.003 | 99.999th=[ 9634] 00:15:14.003 write: IOPS=29.8k, BW=116MiB/s (122MB/s)(1163MiB/10001msec); 0 zone resets 00:15:14.003 slat (usec): min=12, max=9578, avg=42.83, stdev=59.42 00:15:14.003 clat (usec): min=72, max=11101, avg=699.65, stdev=368.81 00:15:14.003 lat (usec): min=86, max=11161, avg=742.48, stdev=379.58 00:15:14.003 clat percentiles (usec): 00:15:14.003 | 50.000th=[ 652], 99.000th=[ 1729], 99.900th=[ 4359], 99.990th=[ 6718], 00:15:14.003 | 99.999th=[10945] 00:15:14.003 bw ( KiB/s): min=94320, max=144999, per=99.98%, avg=119054.42, stdev=2695.24, samples=114 00:15:14.003 iops : min=23578, max=36249, avg=29763.00, stdev=673.83, samples=114 00:15:14.003 lat (usec) : 100=0.01%, 250=9.86%, 500=34.37%, 750=29.06%, 1000=16.51% 00:15:14.003 lat (msec) : 2=9.83%, 4=0.26%, 10=0.11%, 20=0.01% 00:15:14.003 cpu : usr=54.63%, sys=26.01%, ctx=8043, majf=0, minf=25004 00:15:14.003 IO depths : 1=11.6%, 2=23.7%, 4=51.2%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:14.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.003 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.003 issued rwts: total=294367,297716,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.003 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:14.003 00:15:14.003 Run status group 0 (all jobs): 00:15:14.003 READ: bw=115MiB/s (121MB/s), 115MiB/s-115MiB/s (121MB/s-121MB/s), io=1150MiB (1206MB), run=10001-10001msec 00:15:14.003 WRITE: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=1163MiB (1219MB), run=10001-10001msec 00:15:14.263 ----------------------------------------------------- 00:15:14.263 Suppressions used: 00:15:14.263 count bytes template 00:15:14.263 6 48 /usr/src/fio/parse.c 00:15:14.263 3136 301056 /usr/src/fio/iolog.c 00:15:14.263 1 8 libtcmalloc_minimal.so 00:15:14.263 1 904 libcrypto.so 00:15:14.263 ----------------------------------------------------- 00:15:14.263 00:15:14.263 00:15:14.263 real 0m12.704s 00:15:14.263 user 0m35.038s 00:15:14.263 sys 0m15.970s 00:15:14.263 19:29:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:14.263 ************************************ 00:15:14.263 END TEST 
bdev_fio_rw_verify 00:15:14.263 ************************************ 00:15:14.263 19:29:39 -- common/autotest_common.sh@10 -- # set +x 00:15:14.263 19:29:39 -- bdev/blockdev.sh@350 -- # rm -f 00:15:14.263 19:29:39 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:14.522 19:29:39 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:14.522 19:29:39 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:14.522 19:29:39 -- common/autotest_common.sh@1267 -- # local workload=trim 00:15:14.522 19:29:39 -- common/autotest_common.sh@1268 -- # local bdev_type= 00:15:14.522 19:29:39 -- common/autotest_common.sh@1269 -- # local env_context= 00:15:14.522 19:29:39 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:15:14.522 19:29:39 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:14.522 19:29:39 -- common/autotest_common.sh@1277 -- # '[' -z trim ']' 00:15:14.522 19:29:39 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:15:14.522 19:29:39 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:14.522 19:29:39 -- common/autotest_common.sh@1287 -- # cat 00:15:14.522 19:29:39 -- common/autotest_common.sh@1299 -- # '[' trim == verify ']' 00:15:14.522 19:29:39 -- common/autotest_common.sh@1314 -- # '[' trim == trim ']' 00:15:14.522 19:29:39 -- common/autotest_common.sh@1315 -- # echo rw=trimwrite 00:15:14.522 19:29:39 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:14.522 19:29:39 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "595c45ca-6b93-4909-92db-06b86d8af04b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "595c45ca-6b93-4909-92db-06b86d8af04b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "7682f712-0688-4bfb-9416-7826344b2422"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7682f712-0688-4bfb-9416-7826344b2422",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "30eefc08-646a-4529-a2cf-44bc3a8a8f81"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "30eefc08-646a-4529-a2cf-44bc3a8a8f81",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e2b78ebc-1416-4317-aa40-d72044dc35ab"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e2b78ebc-1416-4317-aa40-d72044dc35ab",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "cd630103-c6d9-4e18-adc2-f90cf1ca6006"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd630103-c6d9-4e18-adc2-f90cf1ca6006",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "29f6b104-d895-469d-92ce-019609b6a315"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "29f6b104-d895-469d-92ce-019609b6a315",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:15:14.522 19:29:40 -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:15:14.522 19:29:40 -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:14.522 /home/vagrant/spdk_repo/spdk 00:15:14.522 19:29:40 -- bdev/blockdev.sh@362 -- # popd 00:15:14.522 19:29:40 -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:15:14.522 19:29:40 -- bdev/blockdev.sh@364 -- # return 0 00:15:14.522 00:15:14.522 real 0m12.985s 00:15:14.522 user 0m35.168s 00:15:14.522 sys 0m16.116s 00:15:14.522 19:29:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:14.522 19:29:40 -- common/autotest_common.sh@10 -- # set +x 00:15:14.522 ************************************ 00:15:14.522 END TEST bdev_fio 00:15:14.522 ************************************ 00:15:14.522 19:29:40 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:14.522 19:29:40 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:14.522 19:29:40 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:15:14.522 19:29:40 
-- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:14.522 19:29:40 -- common/autotest_common.sh@10 -- # set +x 00:15:14.522 ************************************ 00:15:14.522 START TEST bdev_verify 00:15:14.522 ************************************ 00:15:14.523 19:29:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:14.780 [2024-04-24 19:29:40.268996] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:14.781 [2024-04-24 19:29:40.269127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75295 ] 00:15:14.781 [2024-04-24 19:29:40.424355] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:15.039 [2024-04-24 19:29:40.712280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.039 [2024-04-24 19:29:40.712302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.976 Running I/O for 5 seconds... 00:15:21.341 00:15:21.341 Latency(us) 00:15:21.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.341 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:21.341 Verification LBA range: start 0x0 length 0xa0000 00:15:21.341 nvme0n1 : 5.05 1648.70 6.44 0.00 0.00 77498.51 13450.62 71431.38 00:15:21.341 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:21.341 Verification LBA range: start 0xa0000 length 0xa0000 00:15:21.341 nvme0n1 : 5.04 1649.21 6.44 0.00 0.00 77469.42 12019.70 83794.50 00:15:21.341 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:21.341 Verification LBA range: start 0x0 length 0xbd0bd 00:15:21.341 nvme1n1 : 5.06 2603.21 10.17 0.00 0.00 48972.77 6296.03 76926.10 00:15:21.341 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:21.341 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:21.341 nvme1n1 : 5.04 2545.95 9.95 0.00 0.00 50058.03 4893.74 68226.12 00:15:21.341 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:21.341 Verification LBA range: start 0x0 length 0x80000 00:15:21.341 nvme2n1 : 5.05 1671.99 6.53 0.00 0.00 76180.63 4607.55 76468.21 00:15:21.341 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:21.341 Verification LBA range: start 0x80000 length 0x80000 00:15:21.341 nvme2n1 : 5.04 1675.48 6.54 0.00 0.00 75831.98 8585.50 72805.06 00:15:21.341 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:21.341 Verification LBA range: start 0x0 length 0x80000 00:15:21.341 nvme2n2 : 5.06 1645.89 6.43 0.00 0.00 77234.24 14080.22 76926.10 00:15:21.341 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:21.341 Verification LBA range: start 0x80000 length 0x80000 00:15:21.341 nvme2n2 : 5.05 1648.53 6.44 0.00 0.00 76913.01 14309.17 69141.91 00:15:21.341 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:21.341 Verification LBA range: start 0x0 length 0x80000 00:15:21.341 nvme2n3 : 5.07 1665.87 6.51 0.00 0.00 76174.72 4693.41 82420.82 00:15:21.341 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:21.341 Verification LBA range: start 0x80000 
length 0x80000 00:15:21.341 nvme2n3 : 5.07 1667.21 6.51 0.00 0.00 75909.51 2361.01 77383.99 00:15:21.342 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:21.342 Verification LBA range: start 0x0 length 0x20000 00:15:21.342 nvme3n1 : 5.07 1665.07 6.50 0.00 0.00 76060.94 3920.71 82878.71 00:15:21.342 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:21.342 Verification LBA range: start 0x20000 length 0x20000 00:15:21.342 nvme3n1 : 5.07 1666.69 6.51 0.00 0.00 75865.29 3219.56 82420.82 00:15:21.342 =================================================================================================================== 00:15:21.342 Total : 21753.78 84.98 0.00 0.00 70122.22 2361.01 83794.50 00:15:22.714 00:15:22.714 real 0m7.806s 00:15:22.714 user 0m12.051s 00:15:22.714 sys 0m1.914s 00:15:22.714 19:29:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:22.714 19:29:47 -- common/autotest_common.sh@10 -- # set +x 00:15:22.714 ************************************ 00:15:22.714 END TEST bdev_verify 00:15:22.714 ************************************ 00:15:22.714 19:29:48 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:22.714 19:29:48 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:15:22.714 19:29:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:22.714 19:29:48 -- common/autotest_common.sh@10 -- # set +x 00:15:22.714 ************************************ 00:15:22.714 START TEST bdev_verify_big_io 00:15:22.714 ************************************ 00:15:22.714 19:29:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:22.714 [2024-04-24 19:29:48.206969] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:22.715 [2024-04-24 19:29:48.207148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75410 ] 00:15:22.715 [2024-04-24 19:29:48.385358] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:23.281 [2024-04-24 19:29:48.653411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.281 [2024-04-24 19:29:48.653444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:23.849 Running I/O for 5 seconds... 
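Both verify stages drive the same bdevperf binary; stripped of the run_test wrappers, the invocation reduces to the sketch below. Paths, mask, and flag values are copied from the xtrace above; the flag glosses are editorial annotations, not bdevperf output.

  # Sketch of the bdevperf call behind run_test bdev_verify_big_io:
  #   -q 128    queue depth per job
  #   -o 65536  I/O size in bytes (the plain bdev_verify stage used -o 4096)
  #   -w verify write the pattern, then read back and compare
  #   -t 5      run time in seconds
  #   -m 0x3    core mask: two reactors, matching the per-core jobs shown
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''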
00:15:30.411 00:15:30.411 Latency(us) 00:15:30.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.411 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:30.411 Verification LBA range: start 0x0 length 0xa000 00:15:30.411 nvme0n1 : 5.86 106.51 6.66 0.00 0.00 1146085.29 109894.43 1648416.42 00:15:30.411 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:30.411 Verification LBA range: start 0xa000 length 0xa000 00:15:30.411 nvme0n1 : 5.75 194.94 12.18 0.00 0.00 636405.97 38920.94 813218.77 00:15:30.411 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:30.411 Verification LBA range: start 0x0 length 0xbd0b 00:15:30.411 nvme1n1 : 5.86 139.20 8.70 0.00 0.00 853753.91 21749.94 1098944.28 00:15:30.411 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:30.411 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:30.411 nvme1n1 : 5.86 131.08 8.19 0.00 0.00 903060.61 25985.45 1619111.24 00:15:30.411 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:30.411 Verification LBA range: start 0x0 length 0x8000 00:15:30.411 nvme2n1 : 5.88 95.31 5.96 0.00 0.00 1246636.18 119968.08 1875531.57 00:15:30.411 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:30.411 Verification LBA range: start 0x8000 length 0x8000 00:15:30.411 nvme2n1 : 5.85 188.79 11.80 0.00 0.00 619652.61 108978.64 652040.27 00:15:30.411 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:30.411 Verification LBA range: start 0x0 length 0x8000 00:15:30.411 nvme2n2 : 5.87 114.73 7.17 0.00 0.00 1009768.34 96157.62 1128249.46 00:15:30.411 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:30.411 Verification LBA range: start 0x8000 length 0x8000 00:15:30.411 nvme2n2 : 5.87 207.22 12.95 0.00 0.00 563077.10 17857.84 747282.11 00:15:30.411 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:30.411 Verification LBA range: start 0x0 length 0x8000 00:15:30.411 nvme2n3 : 5.88 106.16 6.63 0.00 0.00 1060556.81 103483.92 1494564.22 00:15:30.411 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:30.411 Verification LBA range: start 0x8000 length 0x8000 00:15:30.411 nvme2n3 : 5.86 170.54 10.66 0.00 0.00 665227.37 31823.59 1538521.99 00:15:30.411 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:30.411 Verification LBA range: start 0x0 length 0x2000 00:15:30.411 nvme3n1 : 5.88 116.94 7.31 0.00 0.00 945258.38 9730.24 1699700.49 00:15:30.411 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:30.411 Verification LBA range: start 0x2000 length 0x2000 00:15:30.411 nvme3n1 : 5.87 166.22 10.39 0.00 0.00 670343.46 13164.44 2124625.61 00:15:30.411 =================================================================================================================== 00:15:30.411 Total : 1737.63 108.60 0.00 0.00 806002.76 9730.24 2124625.61 00:15:31.349 00:15:31.349 real 0m8.916s 00:15:31.349 user 0m15.786s 00:15:31.349 sys 0m0.632s 00:15:31.349 19:29:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:31.349 19:29:57 -- common/autotest_common.sh@10 -- # set +x 00:15:31.349 ************************************ 00:15:31.349 END TEST bdev_verify_big_io 00:15:31.349 ************************************ 00:15:31.609 19:29:57 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:31.609 19:29:57 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:15:31.609 19:29:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:31.609 19:29:57 -- common/autotest_common.sh@10 -- # set +x 00:15:31.609 ************************************ 00:15:31.609 START TEST bdev_write_zeroes 00:15:31.609 ************************************ 00:15:31.609 19:29:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:31.609 [2024-04-24 19:29:57.244025] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:31.609 [2024-04-24 19:29:57.244149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75535 ] 00:15:31.869 [2024-04-24 19:29:57.411527] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.128 [2024-04-24 19:29:57.669778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.698 Running I/O for 1 seconds... 00:15:33.638 00:15:33.638 Latency(us) 00:15:33.638 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:33.638 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:33.638 nvme0n1 : 1.01 11307.97 44.17 0.00 0.00 11304.75 7440.77 26557.82 00:15:33.638 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:33.638 nvme1n1 : 1.01 13001.87 50.79 0.00 0.00 9804.94 3004.93 22322.31 00:15:33.638 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:33.638 nvme2n1 : 1.01 11282.11 44.07 0.00 0.00 11282.53 6954.26 28274.92 00:15:33.638 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:33.638 nvme2n2 : 1.01 11268.69 44.02 0.00 0.00 11287.53 7040.11 27015.71 00:15:33.638 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:33.638 nvme2n3 : 1.01 11255.57 43.97 0.00 0.00 11293.06 7183.20 25870.98 00:15:33.638 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:33.638 nvme3n1 : 1.02 11317.96 44.21 0.00 0.00 11223.74 4035.19 26214.40 00:15:33.638 =================================================================================================================== 00:15:33.638 Total : 69434.17 271.23 0.00 0.00 11001.80 3004.93 28274.92 00:15:35.020 00:15:35.020 real 0m3.536s 00:15:35.020 user 0m2.784s 00:15:35.020 sys 0m0.584s 00:15:35.020 19:30:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:35.020 19:30:00 -- common/autotest_common.sh@10 -- # set +x 00:15:35.020 ************************************ 00:15:35.020 END TEST bdev_write_zeroes 00:15:35.020 ************************************ 00:15:35.279 19:30:00 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:35.279 19:30:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:15:35.279 19:30:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:35.279 19:30:00 -- common/autotest_common.sh@10 
-- # set +x 00:15:35.279 ************************************ 00:15:35.279 START TEST bdev_json_nonenclosed 00:15:35.279 ************************************ 00:15:35.279 19:30:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:35.279 [2024-04-24 19:30:00.921999] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:35.279 [2024-04-24 19:30:00.922152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75598 ] 00:15:35.539 [2024-04-24 19:30:01.080579] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.797 [2024-04-24 19:30:01.344292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.797 [2024-04-24 19:30:01.344387] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:35.798 [2024-04-24 19:30:01.344411] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:35.798 [2024-04-24 19:30:01.344422] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:36.364 00:15:36.364 real 0m1.008s 00:15:36.364 user 0m0.778s 00:15:36.364 sys 0m0.124s 00:15:36.364 19:30:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:36.364 19:30:01 -- common/autotest_common.sh@10 -- # set +x 00:15:36.365 ************************************ 00:15:36.365 END TEST bdev_json_nonenclosed 00:15:36.365 ************************************ 00:15:36.365 19:30:01 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:36.365 19:30:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:15:36.365 19:30:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:36.365 19:30:01 -- common/autotest_common.sh@10 -- # set +x 00:15:36.365 ************************************ 00:15:36.365 START TEST bdev_json_nonarray 00:15:36.365 ************************************ 00:15:36.365 19:30:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:36.622 [2024-04-24 19:30:02.068433] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:36.622 [2024-04-24 19:30:02.068613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75633 ] 00:15:36.622 [2024-04-24 19:30:02.240888] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.880 [2024-04-24 19:30:02.505154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.880 [2024-04-24 19:30:02.505274] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
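The two JSON negative tests above hinge on json_config_prepare_ctx rejecting malformed configuration files. The actual nonenclosed.json and nonarray.json payloads are not reproduced in this log; the sketch below shows only the shape a valid --json file must have and names the two violations being probed.

  # A valid --json config is an object whose "subsystems" key is an array:
  cat > /tmp/good.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
  EOF
  # bdev_json_nonenclosed feeds a file whose top level is not enclosed in {};
  # bdev_json_nonarray feeds one where "subsystems" is not an array. Both
  # abort startup, and the wrapper asserts on the non-zero exit seen above.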
00:15:36.880 [2024-04-24 19:30:02.505302] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:36.880 [2024-04-24 19:30:02.505314] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:37.449 00:15:37.449 real 0m1.057s 00:15:37.449 user 0m0.812s 00:15:37.449 sys 0m0.137s 00:15:37.449 19:30:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:37.449 19:30:03 -- common/autotest_common.sh@10 -- # set +x 00:15:37.449 ************************************ 00:15:37.449 END TEST bdev_json_nonarray 00:15:37.449 ************************************ 00:15:37.449 19:30:03 -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:15:37.449 19:30:03 -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:15:37.449 19:30:03 -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:15:37.449 19:30:03 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:15:37.449 19:30:03 -- bdev/blockdev.sh@811 -- # cleanup 00:15:37.449 19:30:03 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:37.449 19:30:03 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:37.449 19:30:03 -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:37.449 19:30:03 -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:37.449 19:30:03 -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:37.449 19:30:03 -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:37.449 19:30:03 -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:38.387 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:48.406 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:52.596 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:53.527 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:53.527 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:53.527 00:15:53.527 real 1m21.924s 00:15:53.527 user 1m47.463s 00:15:53.527 sys 1m0.122s 00:15:53.527 19:30:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:53.527 19:30:19 -- common/autotest_common.sh@10 -- # set +x 00:15:53.527 ************************************ 00:15:53.527 END TEST blockdev_xnvme 00:15:53.527 ************************************ 00:15:53.527 19:30:19 -- spdk/autotest.sh@249 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:53.527 19:30:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:53.527 19:30:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:53.527 19:30:19 -- common/autotest_common.sh@10 -- # set +x 00:15:53.527 ************************************ 00:15:53.527 START TEST ublk 00:15:53.527 ************************************ 00:15:53.527 19:30:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:53.785 * Looking for test storage... 
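Between the xnvme suite and the ublk suite, scripts/setup.sh rebinds the emulated NVMe controllers (1b36:0010) from the kernel nvme driver to uio_pci_generic, as the "nvme -> uio_pci_generic" lines above show. When debugging a run by hand, the current binding can be checked per device; an illustrative probe (any of the logged BDFs works):

  # Confirm which kernel driver owns one of the rebound controllers:
  lspci -s 00:10.0 -k | grep 'Kernel driver in use'
  # expected after setup.sh: Kernel driver in use: uio_pci_generic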
00:15:53.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:53.785 19:30:19 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:53.785 19:30:19 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:53.785 19:30:19 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:53.785 19:30:19 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:53.785 19:30:19 -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:53.785 19:30:19 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:53.785 19:30:19 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:53.785 19:30:19 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:53.785 19:30:19 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:53.785 19:30:19 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:53.785 19:30:19 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:53.785 19:30:19 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:53.785 19:30:19 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:53.785 19:30:19 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:53.785 19:30:19 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:53.785 19:30:19 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:53.785 19:30:19 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:53.785 19:30:19 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:53.785 19:30:19 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:53.785 19:30:19 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:53.785 19:30:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:53.785 19:30:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:53.785 19:30:19 -- common/autotest_common.sh@10 -- # set +x 00:15:53.785 ************************************ 00:15:53.785 START TEST test_save_ublk_config 00:15:53.785 ************************************ 00:15:53.785 19:30:19 -- common/autotest_common.sh@1111 -- # test_save_config 00:15:53.785 19:30:19 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:53.785 19:30:19 -- ublk/ublk.sh@103 -- # tgtpid=76002 00:15:53.785 19:30:19 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:53.785 19:30:19 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:53.785 19:30:19 -- ublk/ublk.sh@106 -- # waitforlisten 76002 00:15:53.785 19:30:19 -- common/autotest_common.sh@817 -- # '[' -z 76002 ']' 00:15:53.785 19:30:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.785 19:30:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:53.785 19:30:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.785 19:30:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:53.785 19:30:19 -- common/autotest_common.sh@10 -- # set +x 00:15:53.785 [2024-04-24 19:30:19.431027] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
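The constants just sourced (4 devices, 4 queues, depth 512, 128 MiB malloc backing) parameterize every ublk case that follows. In RPC terms each test disk is a malloc bdev exported through the kernel ublk driver. A hand-run sketch of what test_save_ublk_config sets up for its single disk — the cpumask and queue values mirror the saved config dumped further down, though exact rpc.py flag spellings may vary across SPDK versions:

  scripts/rpc.py ublk_create_target                      # cpumask "1" in the saved config
  scripts/rpc.py bdev_malloc_create -b malloc0 128 4096  # 128 MiB backing, 4 KiB blocks
  scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128   # exposes /dev/ublkb0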
00:15:53.785 [2024-04-24 19:30:19.431148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76002 ] 00:15:54.114 [2024-04-24 19:30:19.594068] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.386 [2024-04-24 19:30:19.883576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.763 19:30:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:55.763 19:30:21 -- common/autotest_common.sh@850 -- # return 0 00:15:55.763 19:30:21 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:55.763 19:30:21 -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:55.763 19:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.763 19:30:21 -- common/autotest_common.sh@10 -- # set +x 00:15:55.763 [2024-04-24 19:30:21.029217] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:55.763 malloc0 00:15:55.763 [2024-04-24 19:30:21.138086] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:55.764 [2024-04-24 19:30:21.138223] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:55.764 [2024-04-24 19:30:21.138245] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:55.764 [2024-04-24 19:30:21.138262] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:55.764 [2024-04-24 19:30:21.146935] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:55.764 [2024-04-24 19:30:21.147011] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:55.764 [2024-04-24 19:30:21.147925] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:55.764 [2024-04-24 19:30:21.148399] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:55.764 [2024-04-24 19:30:21.174683] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:55.764 0 00:15:55.764 19:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.764 19:30:21 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:55.764 19:30:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:55.764 19:30:21 -- common/autotest_common.sh@10 -- # set +x 00:15:55.764 19:30:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:55.764 19:30:21 -- ublk/ublk.sh@115 -- # config='{ 00:15:55.764 "subsystems": [ 00:15:55.764 { 00:15:55.764 "subsystem": "keyring", 00:15:55.764 "config": [] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "iobuf", 00:15:55.764 "config": [ 00:15:55.764 { 00:15:55.764 "method": "iobuf_set_options", 00:15:55.764 "params": { 00:15:55.764 "small_pool_count": 8192, 00:15:55.764 "large_pool_count": 1024, 00:15:55.764 "small_bufsize": 8192, 00:15:55.764 "large_bufsize": 135168 00:15:55.764 } 00:15:55.764 } 00:15:55.764 ] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "sock", 00:15:55.764 "config": [ 00:15:55.764 { 00:15:55.764 "method": "sock_impl_set_options", 00:15:55.764 "params": { 00:15:55.764 "impl_name": "posix", 00:15:55.764 "recv_buf_size": 2097152, 00:15:55.764 "send_buf_size": 2097152, 00:15:55.764 "enable_recv_pipe": true, 00:15:55.764 "enable_quickack": false, 00:15:55.764 "enable_placement_id": 0, 00:15:55.764 "enable_zerocopy_send_server": true, 00:15:55.764 
"enable_zerocopy_send_client": false, 00:15:55.764 "zerocopy_threshold": 0, 00:15:55.764 "tls_version": 0, 00:15:55.764 "enable_ktls": false 00:15:55.764 } 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "method": "sock_impl_set_options", 00:15:55.764 "params": { 00:15:55.764 "impl_name": "ssl", 00:15:55.764 "recv_buf_size": 4096, 00:15:55.764 "send_buf_size": 4096, 00:15:55.764 "enable_recv_pipe": true, 00:15:55.764 "enable_quickack": false, 00:15:55.764 "enable_placement_id": 0, 00:15:55.764 "enable_zerocopy_send_server": true, 00:15:55.764 "enable_zerocopy_send_client": false, 00:15:55.764 "zerocopy_threshold": 0, 00:15:55.764 "tls_version": 0, 00:15:55.764 "enable_ktls": false 00:15:55.764 } 00:15:55.764 } 00:15:55.764 ] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "vmd", 00:15:55.764 "config": [] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "accel", 00:15:55.764 "config": [ 00:15:55.764 { 00:15:55.764 "method": "accel_set_options", 00:15:55.764 "params": { 00:15:55.764 "small_cache_size": 128, 00:15:55.764 "large_cache_size": 16, 00:15:55.764 "task_count": 2048, 00:15:55.764 "sequence_count": 2048, 00:15:55.764 "buf_count": 2048 00:15:55.764 } 00:15:55.764 } 00:15:55.764 ] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "bdev", 00:15:55.764 "config": [ 00:15:55.764 { 00:15:55.764 "method": "bdev_set_options", 00:15:55.764 "params": { 00:15:55.764 "bdev_io_pool_size": 65535, 00:15:55.764 "bdev_io_cache_size": 256, 00:15:55.764 "bdev_auto_examine": true, 00:15:55.764 "iobuf_small_cache_size": 128, 00:15:55.764 "iobuf_large_cache_size": 16 00:15:55.764 } 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "method": "bdev_raid_set_options", 00:15:55.764 "params": { 00:15:55.764 "process_window_size_kb": 1024 00:15:55.764 } 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "method": "bdev_iscsi_set_options", 00:15:55.764 "params": { 00:15:55.764 "timeout_sec": 30 00:15:55.764 } 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "method": "bdev_nvme_set_options", 00:15:55.764 "params": { 00:15:55.764 "action_on_timeout": "none", 00:15:55.764 "timeout_us": 0, 00:15:55.764 "timeout_admin_us": 0, 00:15:55.764 "keep_alive_timeout_ms": 10000, 00:15:55.764 "arbitration_burst": 0, 00:15:55.764 "low_priority_weight": 0, 00:15:55.764 "medium_priority_weight": 0, 00:15:55.764 "high_priority_weight": 0, 00:15:55.764 "nvme_adminq_poll_period_us": 10000, 00:15:55.764 "nvme_ioq_poll_period_us": 0, 00:15:55.764 "io_queue_requests": 0, 00:15:55.764 "delay_cmd_submit": true, 00:15:55.764 "transport_retry_count": 4, 00:15:55.764 "bdev_retry_count": 3, 00:15:55.764 "transport_ack_timeout": 0, 00:15:55.764 "ctrlr_loss_timeout_sec": 0, 00:15:55.764 "reconnect_delay_sec": 0, 00:15:55.764 "fast_io_fail_timeout_sec": 0, 00:15:55.764 "disable_auto_failback": false, 00:15:55.764 "generate_uuids": false, 00:15:55.764 "transport_tos": 0, 00:15:55.764 "nvme_error_stat": false, 00:15:55.764 "rdma_srq_size": 0, 00:15:55.764 "io_path_stat": false, 00:15:55.764 "allow_accel_sequence": false, 00:15:55.764 "rdma_max_cq_size": 0, 00:15:55.764 "rdma_cm_event_timeout_ms": 0, 00:15:55.764 "dhchap_digests": [ 00:15:55.764 "sha256", 00:15:55.764 "sha384", 00:15:55.764 "sha512" 00:15:55.764 ], 00:15:55.764 "dhchap_dhgroups": [ 00:15:55.764 "null", 00:15:55.764 "ffdhe2048", 00:15:55.764 "ffdhe3072", 00:15:55.764 "ffdhe4096", 00:15:55.764 "ffdhe6144", 00:15:55.764 "ffdhe8192" 00:15:55.764 ] 00:15:55.764 } 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "method": "bdev_nvme_set_hotplug", 00:15:55.764 "params": { 00:15:55.764 
"period_us": 100000, 00:15:55.764 "enable": false 00:15:55.764 } 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "method": "bdev_malloc_create", 00:15:55.764 "params": { 00:15:55.764 "name": "malloc0", 00:15:55.764 "num_blocks": 8192, 00:15:55.764 "block_size": 4096, 00:15:55.764 "physical_block_size": 4096, 00:15:55.764 "uuid": "3c58d4b0-ad2e-446a-9f47-fe6504a5cfd5", 00:15:55.764 "optimal_io_boundary": 0 00:15:55.764 } 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "method": "bdev_wait_for_examine" 00:15:55.764 } 00:15:55.764 ] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "scsi", 00:15:55.764 "config": null 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "scheduler", 00:15:55.764 "config": [ 00:15:55.764 { 00:15:55.764 "method": "framework_set_scheduler", 00:15:55.764 "params": { 00:15:55.764 "name": "static" 00:15:55.764 } 00:15:55.764 } 00:15:55.764 ] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "vhost_scsi", 00:15:55.764 "config": [] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "vhost_blk", 00:15:55.764 "config": [] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "ublk", 00:15:55.764 "config": [ 00:15:55.764 { 00:15:55.764 "method": "ublk_create_target", 00:15:55.764 "params": { 00:15:55.764 "cpumask": "1" 00:15:55.764 } 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "method": "ublk_start_disk", 00:15:55.764 "params": { 00:15:55.764 "bdev_name": "malloc0", 00:15:55.764 "ublk_id": 0, 00:15:55.764 "num_queues": 1, 00:15:55.764 "queue_depth": 128 00:15:55.764 } 00:15:55.764 } 00:15:55.764 ] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "nbd", 00:15:55.764 "config": [] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "nvmf", 00:15:55.764 "config": [ 00:15:55.764 { 00:15:55.764 "method": "nvmf_set_config", 00:15:55.764 "params": { 00:15:55.764 "discovery_filter": "match_any", 00:15:55.764 "admin_cmd_passthru": { 00:15:55.764 "identify_ctrlr": false 00:15:55.764 } 00:15:55.764 } 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "method": "nvmf_set_max_subsystems", 00:15:55.764 "params": { 00:15:55.764 "max_subsystems": 1024 00:15:55.764 } 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "method": "nvmf_set_crdt", 00:15:55.764 "params": { 00:15:55.764 "crdt1": 0, 00:15:55.764 "crdt2": 0, 00:15:55.764 "crdt3": 0 00:15:55.764 } 00:15:55.764 } 00:15:55.764 ] 00:15:55.764 }, 00:15:55.764 { 00:15:55.764 "subsystem": "iscsi", 00:15:55.764 "config": [ 00:15:55.764 { 00:15:55.765 "method": "iscsi_set_options", 00:15:55.765 "params": { 00:15:55.765 "node_base": "iqn.2016-06.io.spdk", 00:15:55.765 "max_sessions": 128, 00:15:55.765 "max_connections_per_session": 2, 00:15:55.765 "max_queue_depth": 64, 00:15:55.765 "default_time2wait": 2, 00:15:55.765 "default_time2retain": 20, 00:15:55.765 "first_burst_length": 8192, 00:15:55.765 "immediate_data": true, 00:15:55.765 "allow_duplicated_isid": false, 00:15:55.765 "error_recovery_level": 0, 00:15:55.765 "nop_timeout": 60, 00:15:55.765 "nop_in_interval": 30, 00:15:55.765 "disable_chap": false, 00:15:55.765 "require_chap": false, 00:15:55.765 "mutual_chap": false, 00:15:55.765 "chap_group": 0, 00:15:55.765 "max_large_datain_per_connection": 64, 00:15:55.765 "max_r2t_per_connection": 4, 00:15:55.765 "pdu_pool_size": 36864, 00:15:55.765 "immediate_data_pool_size": 16384, 00:15:55.765 "data_out_pool_size": 2048 00:15:55.765 } 00:15:55.765 } 00:15:55.765 ] 00:15:55.765 } 00:15:55.765 ] 00:15:55.765 }' 00:15:55.765 19:30:21 -- ublk/ublk.sh@116 -- # killprocess 76002 00:15:55.765 19:30:21 -- 
common/autotest_common.sh@936 -- # '[' -z 76002 ']' 00:15:55.765 19:30:21 -- common/autotest_common.sh@940 -- # kill -0 76002 00:15:55.765 19:30:21 -- common/autotest_common.sh@941 -- # uname 00:15:55.765 19:30:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:55.765 19:30:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76002 00:15:56.024 killing process with pid 76002 00:15:56.024 19:30:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:56.024 19:30:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:56.024 19:30:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76002' 00:15:56.024 19:30:21 -- common/autotest_common.sh@955 -- # kill 76002 00:15:56.024 19:30:21 -- common/autotest_common.sh@960 -- # wait 76002 00:15:57.397 [2024-04-24 19:30:23.040189] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:57.655 [2024-04-24 19:30:23.073770] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:57.655 [2024-04-24 19:30:23.073984] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:57.655 [2024-04-24 19:30:23.080723] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:57.655 [2024-04-24 19:30:23.080808] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:57.655 [2024-04-24 19:30:23.080817] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:57.655 [2024-04-24 19:30:23.080851] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:57.655 [2024-04-24 19:30:23.081084] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:59.037 19:30:24 -- ublk/ublk.sh@119 -- # tgtpid=76068 00:15:59.037 19:30:24 -- ublk/ublk.sh@121 -- # waitforlisten 76068 00:15:59.037 19:30:24 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:59.037 19:30:24 -- common/autotest_common.sh@817 -- # '[' -z 76068 ']' 00:15:59.037 19:30:24 -- ublk/ublk.sh@118 -- # echo '{ 00:15:59.037 "subsystems": [ 00:15:59.037 { 00:15:59.037 "subsystem": "keyring", 00:15:59.037 "config": [] 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "subsystem": "iobuf", 00:15:59.037 "config": [ 00:15:59.037 { 00:15:59.037 "method": "iobuf_set_options", 00:15:59.037 "params": { 00:15:59.037 "small_pool_count": 8192, 00:15:59.037 "large_pool_count": 1024, 00:15:59.037 "small_bufsize": 8192, 00:15:59.037 "large_bufsize": 135168 00:15:59.037 } 00:15:59.037 } 00:15:59.037 ] 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "subsystem": "sock", 00:15:59.037 "config": [ 00:15:59.037 { 00:15:59.037 "method": "sock_impl_set_options", 00:15:59.037 "params": { 00:15:59.037 "impl_name": "posix", 00:15:59.037 "recv_buf_size": 2097152, 00:15:59.037 "send_buf_size": 2097152, 00:15:59.037 "enable_recv_pipe": true, 00:15:59.037 "enable_quickack": false, 00:15:59.037 "enable_placement_id": 0, 00:15:59.037 "enable_zerocopy_send_server": true, 00:15:59.037 "enable_zerocopy_send_client": false, 00:15:59.037 "zerocopy_threshold": 0, 00:15:59.037 "tls_version": 0, 00:15:59.037 "enable_ktls": false 00:15:59.037 } 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "method": "sock_impl_set_options", 00:15:59.037 "params": { 00:15:59.037 "impl_name": "ssl", 00:15:59.037 "recv_buf_size": 4096, 00:15:59.037 "send_buf_size": 4096, 00:15:59.037 "enable_recv_pipe": true, 00:15:59.037 "enable_quickack": false, 00:15:59.037 "enable_placement_id": 0, 00:15:59.037 "enable_zerocopy_send_server": true, 
00:15:59.037 "enable_zerocopy_send_client": false, 00:15:59.037 "zerocopy_threshold": 0, 00:15:59.037 "tls_version": 0, 00:15:59.037 "enable_ktls": false 00:15:59.037 } 00:15:59.037 } 00:15:59.037 ] 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "subsystem": "vmd", 00:15:59.037 "config": [] 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "subsystem": "accel", 00:15:59.037 "config": [ 00:15:59.037 { 00:15:59.037 "method": "accel_set_options", 00:15:59.037 "params": { 00:15:59.037 "small_cache_size": 128, 00:15:59.037 "large_cache_size": 16, 00:15:59.037 "task_count": 2048, 00:15:59.037 "sequence_count": 2048, 00:15:59.037 "buf_count": 2048 00:15:59.037 } 00:15:59.037 } 00:15:59.037 ] 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "subsystem": "bdev", 00:15:59.037 "config": [ 00:15:59.037 { 00:15:59.037 "method": "bdev_set_options", 00:15:59.037 "params": { 00:15:59.037 "bdev_io_pool_size": 65535, 00:15:59.037 "bdev_io_cache_size": 256, 00:15:59.037 "bdev_auto_examine": true, 00:15:59.037 "iobuf_small_cache_size": 128, 00:15:59.037 "iobuf_large_cache_size": 16 00:15:59.037 } 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "method": "bdev_raid_set_options", 00:15:59.037 "params": { 00:15:59.037 "process_window_size_kb": 1024 00:15:59.037 } 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "method": "bdev_iscsi_set_options", 00:15:59.037 "params": { 00:15:59.037 "timeout_sec": 30 00:15:59.037 } 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "method": "bdev_nvme_set_options", 00:15:59.037 "params": { 00:15:59.037 "action_on_timeout": "none", 00:15:59.037 "timeout_us": 0, 00:15:59.037 "timeout_admin_us": 0, 00:15:59.037 "keep_alive_timeout_ms": 10000, 00:15:59.037 "arbitration_burst": 0, 00:15:59.037 "low_priority_weight": 0, 00:15:59.037 "medium_priority_weight": 0, 00:15:59.037 "high_priority_weight": 0, 00:15:59.037 "nvme_adminq_poll_period_us": 10000, 00:15:59.037 "nvme_ioq_poll_period_us": 0, 00:15:59.037 "io_queue_requests": 0, 00:15:59.037 "delay_cmd_submit": true, 00:15:59.037 "transport_retry_count": 4, 00:15:59.037 "bdev_retry_count": 3, 00:15:59.037 "transport_ack_timeout": 0, 00:15:59.037 "ctrlr_loss_timeout_sec": 0, 00:15:59.037 "reconnect_delay_sec": 0, 00:15:59.037 "fast_io_fail_timeout_sec": 0, 00:15:59.037 "disable_auto_failback": false, 00:15:59.037 "generate_uuids": false, 00:15:59.037 "transport_tos": 0, 00:15:59.037 "nvme_error_stat": false, 00:15:59.037 "rdma_srq_size": 0, 00:15:59.037 "io_path_stat": false, 00:15:59.037 "allow_accel_sequence": false, 00:15:59.037 "rdma_max_cq_size": 0, 00:15:59.037 "rdma_cm_event_timeout_ms": 0, 00:15:59.037 "dhchap_digests": [ 00:15:59.037 "sha256", 00:15:59.037 "sha384", 00:15:59.037 "sha512" 00:15:59.037 ], 00:15:59.037 "dhchap_dhgroups": [ 00:15:59.037 "null", 00:15:59.037 "ffdhe2048", 00:15:59.037 "ffdhe3072", 00:15:59.037 "ffdhe4096", 00:15:59.037 "ffdhe6144", 00:15:59.037 "ffdhe8192" 00:15:59.037 ] 00:15:59.037 } 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "method": "bdev_nvme_set_hotplug", 00:15:59.037 "params": { 00:15:59.037 "period_us": 100000, 00:15:59.037 "enable": false 00:15:59.037 } 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "method": "bdev_malloc_create", 00:15:59.037 "params": { 00:15:59.037 "name": "malloc0", 00:15:59.037 "num_blocks": 8192, 00:15:59.037 "block_size": 4096, 00:15:59.037 "physical_block_size": 4096, 00:15:59.037 "uuid": "3c58d4b0-ad2e-446a-9f47-fe6504a5cfd5", 00:15:59.037 "optimal_io_boundary": 0 00:15:59.037 } 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "method": "bdev_wait_for_examine" 00:15:59.037 } 00:15:59.037 ] 
00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "subsystem": "scsi", 00:15:59.037 "config": null 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "subsystem": "scheduler", 00:15:59.037 "config": [ 00:15:59.037 { 00:15:59.037 "method": "framework_set_scheduler", 00:15:59.037 "params": { 00:15:59.037 "name": "static" 00:15:59.037 } 00:15:59.037 } 00:15:59.037 ] 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "subsystem": "vhost_scsi", 00:15:59.037 "config": [] 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "subsystem": "vhost_blk", 00:15:59.037 "config": [] 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "subsystem": "ublk", 00:15:59.037 "config": [ 00:15:59.037 { 00:15:59.037 "method": "ublk_create_target", 00:15:59.037 "params": { 00:15:59.037 "cpumask": "1" 00:15:59.037 } 00:15:59.037 }, 00:15:59.037 { 00:15:59.037 "method": "ublk_start_disk", 00:15:59.037 "params": { 00:15:59.037 "bdev_name": "malloc0", 00:15:59.037 "ublk_id": 0, 00:15:59.037 "num_queues": 1, 00:15:59.037 "queue_depth": 128 00:15:59.038 } 00:15:59.038 } 00:15:59.038 ] 00:15:59.038 }, 00:15:59.038 { 00:15:59.038 "subsystem": "nbd", 00:15:59.038 "config": [] 00:15:59.038 }, 00:15:59.038 { 00:15:59.038 "subsystem": "nvmf", 00:15:59.038 "config": [ 00:15:59.038 { 00:15:59.038 "method": "nvmf_set_config", 00:15:59.038 "params": { 00:15:59.038 "discovery_filter": "match_any", 00:15:59.038 "admin_cmd_passthru": { 00:15:59.038 "identify_ctrlr": false 00:15:59.038 } 00:15:59.038 } 00:15:59.038 }, 00:15:59.038 { 00:15:59.038 "method": "nvmf_set_max_subsystems", 00:15:59.038 "params": { 00:15:59.038 "max_subsystems": 1024 00:15:59.038 } 00:15:59.038 }, 00:15:59.038 { 00:15:59.038 "method": "nvmf_set_crdt", 00:15:59.038 "params": { 00:15:59.038 "crdt1": 0, 00:15:59.038 "crdt2": 0, 00:15:59.038 "crdt3": 0 00:15:59.038 } 00:15:59.038 } 00:15:59.038 ] 00:15:59.038 }, 00:15:59.038 { 00:15:59.038 "subsystem": "iscsi", 00:15:59.038 "config": [ 00:15:59.038 { 00:15:59.038 "method": "iscsi_set_options", 00:15:59.038 "params": { 00:15:59.038 "node_base": "iqn.2016-06.io.spdk", 00:15:59.038 "max_sessions": 128, 00:15:59.038 "max_connections_per_session": 2, 00:15:59.038 "max_queue_depth": 64, 00:15:59.038 "default_time2wait": 2, 00:15:59.038 "default_time2retain": 20, 00:15:59.038 "first_burst_length": 8192, 00:15:59.038 "immediate_data": true, 00:15:59.038 "allow_duplicated_isid": false, 00:15:59.038 "error_recovery_level": 0, 00:15:59.038 "nop_timeout": 60, 00:15:59.038 "nop_in_interval": 30, 00:15:59.038 "disable_chap": false, 00:15:59.038 "require_chap": false, 00:15:59.038 "mutual_chap": false, 00:15:59.038 "chap_group": 0, 00:15:59.038 "max_large_datain_per_connection": 64, 00:15:59.038 "max_r2t_per_connection": 4, 00:15:59.038 "pdu_pool_size": 36864, 00:15:59.038 "immediate_data_pool_size": 16384, 00:15:59.038 "data_out_pool_size": 2048 00:15:59.038 } 00:15:59.038 } 00:15:59.038 ] 00:15:59.038 } 00:15:59.038 ] 00:15:59.038 }' 00:15:59.038 19:30:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.038 19:30:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:59.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.038 19:30:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
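What this second launch is testing: the configuration dumped by save_config above is fed back, byte for byte, to a fresh spdk_tgt through /dev/fd/63. Reduced to its essentials, the round trip looks like the sketch below, with process substitution standing in for the harness's fd plumbing:

  # Save the live configuration, then boot a new target from the dump:
  config=$(scripts/rpc.py save_config)
  build/bin/spdk_tgt -L ublk -c <(echo "$config")
  # The test then checks that /dev/ublkb0 reappears with no further RPC
  # calls, i.e. that the ublk target and disk were restored purely from
  # the saved config.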
00:15:59.038 19:30:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:59.038 19:30:24 -- common/autotest_common.sh@10 -- # set +x 00:15:59.358 [2024-04-24 19:30:24.715572] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:15:59.358 [2024-04-24 19:30:24.715706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76068 ] 00:15:59.358 [2024-04-24 19:30:24.883818] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.642 [2024-04-24 19:30:25.155355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.018 [2024-04-24 19:30:26.338080] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:01.018 [2024-04-24 19:30:26.344795] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:01.018 [2024-04-24 19:30:26.344895] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:01.018 [2024-04-24 19:30:26.344906] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:01.018 [2024-04-24 19:30:26.344913] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:01.018 [2024-04-24 19:30:26.352755] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:01.018 [2024-04-24 19:30:26.352780] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:01.018 [2024-04-24 19:30:26.359673] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:01.018 [2024-04-24 19:30:26.359795] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:01.018 [2024-04-24 19:30:26.375669] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:01.018 19:30:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:01.018 19:30:26 -- common/autotest_common.sh@850 -- # return 0 00:16:01.018 19:30:26 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:01.018 19:30:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:01.018 19:30:26 -- common/autotest_common.sh@10 -- # set +x 00:16:01.018 19:30:26 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:01.018 19:30:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:01.018 19:30:26 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:01.018 19:30:26 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:01.018 19:30:26 -- ublk/ublk.sh@125 -- # killprocess 76068 00:16:01.018 19:30:26 -- common/autotest_common.sh@936 -- # '[' -z 76068 ']' 00:16:01.018 19:30:26 -- common/autotest_common.sh@940 -- # kill -0 76068 00:16:01.018 19:30:26 -- common/autotest_common.sh@941 -- # uname 00:16:01.018 19:30:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:01.018 19:30:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76068 00:16:01.018 killing process with pid 76068 00:16:01.018 19:30:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:01.018 19:30:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:01.018 19:30:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76068' 00:16:01.018 19:30:26 -- common/autotest_common.sh@955 -- # kill 76068 00:16:01.018 19:30:26 -- common/autotest_common.sh@960 -- # wait 76068 
00:16:02.922 [2024-04-24 19:30:28.190337] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:02.922 [2024-04-24 19:30:28.222675] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:02.922 [2024-04-24 19:30:28.222883] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:02.922 [2024-04-24 19:30:28.232667] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:02.922 [2024-04-24 19:30:28.232743] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:02.922 [2024-04-24 19:30:28.232752] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:02.922 [2024-04-24 19:30:28.232780] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:02.922 [2024-04-24 19:30:28.232987] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:04.300 19:30:29 -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:04.300 00:16:04.301 real 0m10.474s 00:16:04.301 user 0m9.156s 00:16:04.301 sys 0m2.051s 00:16:04.301 19:30:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:04.301 19:30:29 -- common/autotest_common.sh@10 -- # set +x 00:16:04.301 ************************************ 00:16:04.301 END TEST test_save_ublk_config 00:16:04.301 ************************************ 00:16:04.301 19:30:29 -- ublk/ublk.sh@139 -- # spdk_pid=76159 00:16:04.301 19:30:29 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:04.301 19:30:29 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:04.301 19:30:29 -- ublk/ublk.sh@141 -- # waitforlisten 76159 00:16:04.301 19:30:29 -- common/autotest_common.sh@817 -- # '[' -z 76159 ']' 00:16:04.301 19:30:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.301 19:30:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:04.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.301 19:30:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.301 19:30:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:04.301 19:30:29 -- common/autotest_common.sh@10 -- # set +x 00:16:04.301 [2024-04-24 19:30:29.965332] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:16:04.301 [2024-04-24 19:30:29.965474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76159 ] 00:16:04.559 [2024-04-24 19:30:30.142462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:04.819 [2024-04-24 19:30:30.422741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.819 [2024-04-24 19:30:30.422748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.202 19:30:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:06.202 19:30:31 -- common/autotest_common.sh@850 -- # return 0 00:16:06.202 19:30:31 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:06.202 19:30:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:06.202 19:30:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:06.202 19:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:06.202 ************************************ 00:16:06.202 START TEST test_create_ublk 00:16:06.202 ************************************ 00:16:06.202 19:30:31 -- common/autotest_common.sh@1111 -- # test_create_ublk 00:16:06.202 19:30:31 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:06.202 19:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.202 19:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:06.202 [2024-04-24 19:30:31.597349] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:06.202 19:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.202 19:30:31 -- ublk/ublk.sh@33 -- # ublk_target= 00:16:06.202 19:30:31 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:06.202 19:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.202 19:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:06.461 19:30:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.461 19:30:31 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:06.461 19:30:31 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:06.461 19:30:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.461 19:30:31 -- common/autotest_common.sh@10 -- # set +x 00:16:06.461 [2024-04-24 19:30:31.991897] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:06.461 [2024-04-24 19:30:31.992337] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:06.461 [2024-04-24 19:30:31.992355] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:06.462 [2024-04-24 19:30:31.992366] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:06.462 [2024-04-24 19:30:31.999693] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:06.462 [2024-04-24 19:30:31.999737] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:06.462 [2024-04-24 19:30:32.007701] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:06.462 [2024-04-24 19:30:32.015932] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:06.462 [2024-04-24 19:30:32.033696] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:06.462 19:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.462 19:30:32 -- 
ublk/ublk.sh@37 -- # ublk_id=0 00:16:06.462 19:30:32 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:06.462 19:30:32 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:06.462 19:30:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:06.462 19:30:32 -- common/autotest_common.sh@10 -- # set +x 00:16:06.462 19:30:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:06.462 19:30:32 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:06.462 { 00:16:06.462 "ublk_device": "/dev/ublkb0", 00:16:06.462 "id": 0, 00:16:06.462 "queue_depth": 512, 00:16:06.462 "num_queues": 4, 00:16:06.462 "bdev_name": "Malloc0" 00:16:06.462 } 00:16:06.462 ]' 00:16:06.462 19:30:32 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:06.462 19:30:32 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:06.462 19:30:32 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:06.721 19:30:32 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:06.721 19:30:32 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:06.721 19:30:32 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:06.721 19:30:32 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:06.721 19:30:32 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:06.721 19:30:32 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:06.721 19:30:32 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:06.721 19:30:32 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:06.721 19:30:32 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:06.721 19:30:32 -- lvol/common.sh@41 -- # local offset=0 00:16:06.721 19:30:32 -- lvol/common.sh@42 -- # local size=134217728 00:16:06.721 19:30:32 -- lvol/common.sh@43 -- # local rw=write 00:16:06.721 19:30:32 -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:06.721 19:30:32 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:06.721 19:30:32 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:06.721 19:30:32 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:06.721 19:30:32 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:06.721 19:30:32 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:06.721 19:30:32 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:06.721 fio: verification read phase will never start because write phase uses all of runtime 00:16:06.721 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:06.721 fio-3.35 00:16:06.721 Starting 1 process 00:16:18.935 00:16:18.935 fio_test: (groupid=0, jobs=1): err= 0: pid=76215: Wed Apr 24 19:30:42 2024 00:16:18.935 write: IOPS=13.3k, BW=52.1MiB/s (54.6MB/s)(521MiB/10001msec); 0 zone resets 00:16:18.935 clat (usec): min=39, max=12085, avg=74.07, stdev=136.90 00:16:18.935 lat (usec): min=39, max=12113, avg=74.60, stdev=136.95 00:16:18.935 clat percentiles (usec): 00:16:18.935 | 1.00th=[ 56], 5.00th=[ 58], 10.00th=[ 59], 20.00th=[ 62], 00:16:18.935 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 69], 00:16:18.935 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 79], 95.00th=[ 88], 
00:16:18.936 | 99.00th=[ 108], 99.50th=[ 129], 99.90th=[ 2966], 99.95th=[ 3523], 00:16:18.936 | 99.99th=[ 4113] 00:16:18.936 bw ( KiB/s): min=18432, max=60208, per=99.99%, avg=53309.89, stdev=9102.25, samples=19 00:16:18.936 iops : min= 4608, max=15052, avg=13327.58, stdev=2275.48, samples=19 00:16:18.936 lat (usec) : 50=0.01%, 100=98.39%, 250=1.35%, 500=0.01%, 750=0.01% 00:16:18.936 lat (usec) : 1000=0.01% 00:16:18.936 lat (msec) : 2=0.06%, 4=0.14%, 10=0.02%, 20=0.01% 00:16:18.936 cpu : usr=2.42%, sys=8.34%, ctx=133299, majf=0, minf=797 00:16:18.936 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.936 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.936 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.936 issued rwts: total=0,133297,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.936 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.936 00:16:18.936 Run status group 0 (all jobs): 00:16:18.936 WRITE: bw=52.1MiB/s (54.6MB/s), 52.1MiB/s-52.1MiB/s (54.6MB/s-54.6MB/s), io=521MiB (546MB), run=10001-10001msec 00:16:18.936 00:16:18.936 Disk stats (read/write): 00:16:18.936 ublkb0: ios=0/131849, merge=0/0, ticks=0/8852, in_queue=8853, util=99.00% 00:16:18.936 19:30:42 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:18.936 19:30:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.936 19:30:42 -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 [2024-04-24 19:30:42.521802] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:18.936 [2024-04-24 19:30:42.556728] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:18.936 [2024-04-24 19:30:42.557954] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:18.936 [2024-04-24 19:30:42.570724] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:18.936 [2024-04-24 19:30:42.571106] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:18.936 [2024-04-24 19:30:42.571125] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:18.936 19:30:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.936 19:30:42 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:16:18.936 19:30:42 -- common/autotest_common.sh@638 -- # local es=0 00:16:18.936 19:30:42 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:18.936 19:30:42 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:18.936 19:30:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:18.936 19:30:42 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:18.936 19:30:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:18.936 19:30:42 -- common/autotest_common.sh@641 -- # rpc_cmd ublk_stop_disk 0 00:16:18.936 19:30:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.936 19:30:42 -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 [2024-04-24 19:30:42.592784] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:18.936 request: 00:16:18.936 { 00:16:18.936 "ublk_id": 0, 00:16:18.936 "method": "ublk_stop_disk", 00:16:18.936 "req_id": 1 00:16:18.936 } 00:16:18.936 Got JSON-RPC error response 00:16:18.936 response: 00:16:18.936 { 00:16:18.936 "code": -19, 00:16:18.936 "message": "No such device" 00:16:18.936 } 00:16:18.936 19:30:42 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 
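The run above is the complete single-disk lifecycle: ublk_create_target, a 128 MiB malloc bdev, ublk_start_disk exposing /dev/ublkb0 (ADD_DEV -> SET_PARAMS -> START_DEV), a 10-second fio pattern write for verification, then ublk_stop_disk plus a negative check that a second stop fails with -19 (No such device). Stripped of the harness's xtrace noise, the same flow is roughly the following sketch, assuming a running spdk_tgt with ublk support and the rpc.py client at the path this log uses:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc ublk_create_target
  $rpc bdev_malloc_create 128 4096                     # 128 MiB, 4 KiB blocks -> Malloc0
  $rpc ublk_start_disk Malloc0 0 -q 4 -d 512           # 4 queues, depth 512 -> /dev/ublkb0
  $rpc ublk_get_disks -n 0 | jq -r '.[0].ublk_device'  # expect /dev/ublkb0
  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
  $rpc ublk_stop_disk 0
  $rpc ublk_stop_disk 0 && echo unexpected || echo "second stop fails, as asserted above"
  $rpc ublk_destroy_target
  $rpc bdev_malloc_delete Malloc0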
00:16:18.936 19:30:42 -- common/autotest_common.sh@641 -- # es=1 00:16:18.936 19:30:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:18.936 19:30:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:18.936 19:30:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:18.936 19:30:42 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:18.936 19:30:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.936 19:30:42 -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 [2024-04-24 19:30:42.609771] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:18.936 [2024-04-24 19:30:42.617660] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:18.936 [2024-04-24 19:30:42.617703] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:18.936 19:30:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.936 19:30:42 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:18.936 19:30:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.936 19:30:42 -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 19:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.936 19:30:43 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:18.936 19:30:43 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:18.936 19:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.936 19:30:43 -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 19:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.936 19:30:43 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:18.936 19:30:43 -- lvol/common.sh@26 -- # jq length 00:16:18.936 19:30:43 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:18.936 19:30:43 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:18.936 19:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.936 19:30:43 -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 19:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.936 19:30:43 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:18.936 19:30:43 -- lvol/common.sh@28 -- # jq length 00:16:18.936 19:30:43 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:18.936 00:16:18.936 real 0m11.555s 00:16:18.936 user 0m0.667s 00:16:18.936 sys 0m0.937s 00:16:18.936 19:30:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:18.936 19:30:43 -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 ************************************ 00:16:18.936 END TEST test_create_ublk 00:16:18.936 ************************************ 00:16:18.936 19:30:43 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:18.936 19:30:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:18.936 19:30:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:18.936 19:30:43 -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 ************************************ 00:16:18.936 START TEST test_create_multi_ublk 00:16:18.936 ************************************ 00:16:18.936 19:30:43 -- common/autotest_common.sh@1111 -- # test_create_multi_ublk 00:16:18.936 19:30:43 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:18.936 19:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.936 19:30:43 -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 [2024-04-24 19:30:43.276012] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:18.936 19:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.936 19:30:43 -- 
ublk/ublk.sh@62 -- # ublk_target= 00:16:18.936 19:30:43 -- ublk/ublk.sh@64 -- # seq 0 3 00:16:18.936 19:30:43 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.936 19:30:43 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:18.936 19:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.936 19:30:43 -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 19:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.936 19:30:43 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:18.936 19:30:43 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:18.936 19:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.936 19:30:43 -- common/autotest_common.sh@10 -- # set +x 00:16:18.936 [2024-04-24 19:30:43.651873] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:18.936 [2024-04-24 19:30:43.652328] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:18.936 [2024-04-24 19:30:43.652347] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:18.936 [2024-04-24 19:30:43.652357] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.936 [2024-04-24 19:30:43.659727] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.936 [2024-04-24 19:30:43.659780] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.936 [2024-04-24 19:30:43.667721] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.936 [2024-04-24 19:30:43.668512] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:18.936 [2024-04-24 19:30:43.677359] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.936 19:30:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.937 19:30:43 -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:18.937 19:30:43 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.937 19:30:43 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:18.937 19:30:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.937 19:30:43 -- common/autotest_common.sh@10 -- # set +x 00:16:18.937 19:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.937 19:30:44 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:18.937 19:30:44 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:18.937 19:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.937 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:16:18.937 [2024-04-24 19:30:44.053864] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:18.937 [2024-04-24 19:30:44.054341] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:18.937 [2024-04-24 19:30:44.054363] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:18.937 [2024-04-24 19:30:44.054371] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.937 [2024-04-24 19:30:44.068700] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.937 [2024-04-24 19:30:44.068738] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.937 [2024-04-24 19:30:44.079707] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 
completed 00:16:18.937 [2024-04-24 19:30:44.080473] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:18.937 [2024-04-24 19:30:44.094751] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.937 19:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.937 19:30:44 -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:18.937 19:30:44 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.937 19:30:44 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:18.937 19:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.937 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:16:18.937 19:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.937 19:30:44 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:18.937 19:30:44 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:18.937 19:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.937 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:16:18.937 [2024-04-24 19:30:44.469942] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:18.937 [2024-04-24 19:30:44.470488] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:18.937 [2024-04-24 19:30:44.470521] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:18.937 [2024-04-24 19:30:44.470537] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:18.937 [2024-04-24 19:30:44.477763] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:18.937 [2024-04-24 19:30:44.477834] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:18.937 [2024-04-24 19:30:44.485739] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:18.937 [2024-04-24 19:30:44.486828] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:18.937 [2024-04-24 19:30:44.502714] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:18.937 19:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:18.937 19:30:44 -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:18.937 19:30:44 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:18.937 19:30:44 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:18.937 19:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:18.937 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:16:19.505 19:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.505 19:30:44 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:19.505 19:30:44 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:19.505 19:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.505 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:16:19.505 [2024-04-24 19:30:44.909895] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:19.505 [2024-04-24 19:30:44.910350] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:19.505 [2024-04-24 19:30:44.910383] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:19.505 [2024-04-24 19:30:44.910392] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:19.505 [2024-04-24 19:30:44.917714] ublk.c: 
327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:19.505 [2024-04-24 19:30:44.917748] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:19.505 [2024-04-24 19:30:44.925710] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:19.505 [2024-04-24 19:30:44.926540] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:19.505 [2024-04-24 19:30:44.942700] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:19.505 19:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.505 19:30:44 -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:19.505 19:30:44 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:19.505 19:30:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:19.505 19:30:44 -- common/autotest_common.sh@10 -- # set +x 00:16:19.505 19:30:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:19.505 19:30:44 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:19.505 { 00:16:19.505 "ublk_device": "/dev/ublkb0", 00:16:19.505 "id": 0, 00:16:19.505 "queue_depth": 512, 00:16:19.505 "num_queues": 4, 00:16:19.505 "bdev_name": "Malloc0" 00:16:19.505 }, 00:16:19.505 { 00:16:19.505 "ublk_device": "/dev/ublkb1", 00:16:19.505 "id": 1, 00:16:19.505 "queue_depth": 512, 00:16:19.505 "num_queues": 4, 00:16:19.505 "bdev_name": "Malloc1" 00:16:19.505 }, 00:16:19.505 { 00:16:19.505 "ublk_device": "/dev/ublkb2", 00:16:19.505 "id": 2, 00:16:19.505 "queue_depth": 512, 00:16:19.505 "num_queues": 4, 00:16:19.505 "bdev_name": "Malloc2" 00:16:19.505 }, 00:16:19.505 { 00:16:19.505 "ublk_device": "/dev/ublkb3", 00:16:19.505 "id": 3, 00:16:19.505 "queue_depth": 512, 00:16:19.505 "num_queues": 4, 00:16:19.505 "bdev_name": "Malloc3" 00:16:19.505 } 00:16:19.505 ]' 00:16:19.505 19:30:44 -- ublk/ublk.sh@72 -- # seq 0 3 00:16:19.505 19:30:44 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:19.505 19:30:44 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:19.505 19:30:45 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:19.505 19:30:45 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:19.505 19:30:45 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:19.505 19:30:45 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:19.505 19:30:45 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:19.505 19:30:45 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:19.765 19:30:45 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:19.765 19:30:45 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:19.765 19:30:45 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:19.765 19:30:45 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:19.765 19:30:45 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:19.765 19:30:45 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:16:19.765 19:30:45 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:19.765 19:30:45 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:19.765 19:30:45 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:19.765 19:30:45 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:19.765 19:30:45 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:19.765 19:30:45 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:19.765 19:30:45 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:20.024 19:30:45 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:20.024 19:30:45 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:20.024 19:30:45 -- ublk/ublk.sh@74 
-- # jq -r '.[2].ublk_device' 00:16:20.024 19:30:45 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:20.024 19:30:45 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:20.024 19:30:45 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:20.024 19:30:45 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:20.024 19:30:45 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:20.024 19:30:45 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:20.024 19:30:45 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:20.024 19:30:45 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:20.283 19:30:45 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:20.283 19:30:45 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:20.283 19:30:45 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:20.283 19:30:45 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:20.283 19:30:45 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:20.283 19:30:45 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:20.283 19:30:45 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:20.283 19:30:45 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:20.283 19:30:45 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:20.283 19:30:45 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:20.283 19:30:45 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:20.283 19:30:45 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:20.283 19:30:45 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:20.283 19:30:45 -- ublk/ublk.sh@85 -- # seq 0 3 00:16:20.283 19:30:45 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:20.283 19:30:45 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:20.283 19:30:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.283 19:30:45 -- common/autotest_common.sh@10 -- # set +x 00:16:20.283 [2024-04-24 19:30:45.954822] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:20.542 [2024-04-24 19:30:45.989763] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:20.542 [2024-04-24 19:30:45.991236] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:20.542 [2024-04-24 19:30:45.997695] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:20.542 [2024-04-24 19:30:45.998059] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:20.542 [2024-04-24 19:30:45.998079] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:20.542 19:30:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.542 19:30:46 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:20.542 19:30:46 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:20.542 19:30:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.542 19:30:46 -- common/autotest_common.sh@10 -- # set +x 00:16:20.542 [2024-04-24 19:30:46.013847] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:20.542 [2024-04-24 19:30:46.047732] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:20.542 [2024-04-24 19:30:46.053169] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:20.542 [2024-04-24 19:30:46.057315] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:20.542 [2024-04-24 19:30:46.057681] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:20.542 [2024-04-24 19:30:46.057699] 
ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:20.543 19:30:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.543 19:30:46 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:20.543 19:30:46 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:20.543 19:30:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.543 19:30:46 -- common/autotest_common.sh@10 -- # set +x 00:16:20.543 [2024-04-24 19:30:46.075826] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:20.543 [2024-04-24 19:30:46.106226] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:20.543 [2024-04-24 19:30:46.109024] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:20.543 [2024-04-24 19:30:46.115797] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:20.543 [2024-04-24 19:30:46.116183] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:20.543 [2024-04-24 19:30:46.116204] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:20.543 19:30:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.543 19:30:46 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:20.543 19:30:46 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:20.543 19:30:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.543 19:30:46 -- common/autotest_common.sh@10 -- # set +x 00:16:20.543 [2024-04-24 19:30:46.123814] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:20.543 [2024-04-24 19:30:46.168275] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:20.543 [2024-04-24 19:30:46.172072] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:20.543 [2024-04-24 19:30:46.178679] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:20.543 [2024-04-24 19:30:46.179108] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:20.543 [2024-04-24 19:30:46.179133] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:20.543 19:30:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:20.543 19:30:46 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:20.802 [2024-04-24 19:30:46.406864] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:20.802 [2024-04-24 19:30:46.416680] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:20.802 [2024-04-24 19:30:46.416748] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:20.802 19:30:46 -- ublk/ublk.sh@93 -- # seq 0 3 00:16:20.802 19:30:46 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:20.802 19:30:46 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:20.802 19:30:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:20.802 19:30:46 -- common/autotest_common.sh@10 -- # set +x 00:16:21.370 19:30:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.370 19:30:46 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:21.370 19:30:46 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:21.370 19:30:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.370 19:30:46 -- common/autotest_common.sh@10 -- # set +x 00:16:21.629 19:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.629 19:30:47 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 
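test_create_multi_ublk, traced above, is the same ADD_DEV/SET_PARAMS/START_DEV handshake repeated per device: four malloc bdevs, four ublk disks, one ublk_get_disks listing verified field by field with jq, then an ordered stop of every disk before the target is destroyed. A sketch of the loop structure, under the same assumptions as the previous sketch:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in 0 1 2 3; do
    $rpc bdev_malloc_create -b "Malloc$i" 128 4096
    $rpc ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # -> /dev/ublkb$i
  done
  $rpc ublk_get_disks | jq -r '.[].ublk_device'        # /dev/ublkb0 .. /dev/ublkb3
  for i in 0 1 2 3; do $rpc ublk_stop_disk "$i"; done
  $rpc ublk_destroy_target
  for i in 0 1 2 3; do $rpc bdev_malloc_delete "Malloc$i"; done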
00:16:21.629 19:30:47 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:21.629 19:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.629 19:30:47 -- common/autotest_common.sh@10 -- # set +x 00:16:22.198 19:30:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.198 19:30:47 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:22.198 19:30:47 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:22.198 19:30:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.198 19:30:47 -- common/autotest_common.sh@10 -- # set +x 00:16:22.457 19:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.457 19:30:48 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:22.457 19:30:48 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:22.457 19:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.457 19:30:48 -- common/autotest_common.sh@10 -- # set +x 00:16:22.457 19:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.457 19:30:48 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:22.457 19:30:48 -- lvol/common.sh@26 -- # jq length 00:16:22.716 19:30:48 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:22.716 19:30:48 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:22.716 19:30:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.716 19:30:48 -- common/autotest_common.sh@10 -- # set +x 00:16:22.716 19:30:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.716 19:30:48 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:22.716 19:30:48 -- lvol/common.sh@28 -- # jq length 00:16:22.716 ************************************ 00:16:22.716 END TEST test_create_multi_ublk 00:16:22.716 ************************************ 00:16:22.716 19:30:48 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:22.716 00:16:22.716 real 0m4.967s 00:16:22.716 user 0m1.158s 00:16:22.716 sys 0m0.234s 00:16:22.716 19:30:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:22.716 19:30:48 -- common/autotest_common.sh@10 -- # set +x 00:16:22.716 19:30:48 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:22.716 19:30:48 -- ublk/ublk.sh@147 -- # cleanup 00:16:22.716 19:30:48 -- ublk/ublk.sh@130 -- # killprocess 76159 00:16:22.716 19:30:48 -- common/autotest_common.sh@936 -- # '[' -z 76159 ']' 00:16:22.716 19:30:48 -- common/autotest_common.sh@940 -- # kill -0 76159 00:16:22.716 19:30:48 -- common/autotest_common.sh@941 -- # uname 00:16:22.716 19:30:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.716 19:30:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76159 00:16:22.716 19:30:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:22.716 killing process with pid 76159 00:16:22.716 19:30:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:22.716 19:30:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76159' 00:16:22.716 19:30:48 -- common/autotest_common.sh@955 -- # kill 76159 00:16:22.716 19:30:48 -- common/autotest_common.sh@960 -- # wait 76159 00:16:24.225 [2024-04-24 19:30:49.681090] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:24.225 [2024-04-24 19:30:49.681167] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:25.605 00:16:25.605 real 0m32.037s 00:16:25.605 user 0m48.234s 00:16:25.605 sys 0m8.134s 00:16:25.605 19:30:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:25.605 19:30:51 -- common/autotest_common.sh@10 -- # set +x 00:16:25.605 
************************************ 00:16:25.605 END TEST ublk 00:16:25.605 ************************************ 00:16:25.605 19:30:51 -- spdk/autotest.sh@250 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:25.605 19:30:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:25.605 19:30:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:25.605 19:30:51 -- common/autotest_common.sh@10 -- # set +x 00:16:25.863 ************************************ 00:16:25.863 START TEST ublk_recovery 00:16:25.863 ************************************ 00:16:25.863 19:30:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:25.863 * Looking for test storage... 00:16:25.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:25.863 19:30:51 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:25.863 19:30:51 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:25.863 19:30:51 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:25.863 19:30:51 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:25.863 19:30:51 -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:25.863 19:30:51 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:25.863 19:30:51 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:25.863 19:30:51 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:25.863 19:30:51 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:25.863 19:30:51 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:25.863 19:30:51 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76590 00:16:25.863 19:30:51 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.863 19:30:51 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76590 00:16:25.863 19:30:51 -- common/autotest_common.sh@817 -- # '[' -z 76590 ']' 00:16:25.863 19:30:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.863 19:30:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:25.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.863 19:30:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.863 19:30:51 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:25.863 19:30:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:25.863 19:30:51 -- common/autotest_common.sh@10 -- # set +x 00:16:26.122 [2024-04-24 19:30:51.571757] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
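The recovery test starting here exercises the crash path rather than the clean shutdown: a ublk disk is kept under randrw load by fio while the target process is killed with SIGKILL, and a fresh target re-adopts the still-open /dev/ublkb1 through ublk_recover_disk. In outline, the scenario the following trace walks through looks like this (a sketch; the real script interposes waitforlisten, taskset and sleep plumbing between the steps):

  modprobe ublk_drv
  build/bin/spdk_tgt -m 0x3 -L ublk & tgt=$!           # wait for the RPC socket before continuing
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc ublk_create_target
  $rpc bdev_malloc_create -b malloc0 64 4096
  $rpc ublk_start_disk malloc0 1 -q 2 -d 128           # -> /dev/ublkb1
  fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
      --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 & fio_pid=$!
  kill -9 "$tgt"                                       # simulated target crash mid-I/O
  build/bin/spdk_tgt -m 0x3 -L ublk & tgt=$!           # new pid, same ublk device still open
  $rpc ublk_create_target
  $rpc bdev_malloc_create -b malloc0 64 4096
  $rpc ublk_recover_disk malloc0 1                     # GET_DEV_INFO / START_USER_RECOVERY / END_USER_RECOVERY
  wait "$fio_pid"                                      # fio must finish its full 60s without I/O errors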
00:16:26.122 [2024-04-24 19:30:51.571919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76590 ] 00:16:26.122 [2024-04-24 19:30:51.745462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:26.380 [2024-04-24 19:30:51.996993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.380 [2024-04-24 19:30:51.997025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.758 19:30:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:27.758 19:30:53 -- common/autotest_common.sh@850 -- # return 0 00:16:27.758 19:30:53 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:27.758 19:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.758 19:30:53 -- common/autotest_common.sh@10 -- # set +x 00:16:27.758 [2024-04-24 19:30:53.145266] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:27.758 19:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.758 19:30:53 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:27.758 19:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.758 19:30:53 -- common/autotest_common.sh@10 -- # set +x 00:16:27.758 malloc0 00:16:27.758 19:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.758 19:30:53 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:27.758 19:30:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:27.758 19:30:53 -- common/autotest_common.sh@10 -- # set +x 00:16:27.758 [2024-04-24 19:30:53.349864] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:16:27.758 [2024-04-24 19:30:53.350012] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:27.758 [2024-04-24 19:30:53.350029] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:27.758 [2024-04-24 19:30:53.350040] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:27.758 [2024-04-24 19:30:53.357897] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:27.758 [2024-04-24 19:30:53.357955] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:27.758 [2024-04-24 19:30:53.365676] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:27.758 [2024-04-24 19:30:53.365896] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:27.758 [2024-04-24 19:30:53.379686] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:27.758 1 00:16:27.758 19:30:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:27.758 19:30:53 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:29.136 19:30:54 -- ublk/ublk_recovery.sh@31 -- # fio_proc=76631 00:16:29.136 19:30:54 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:29.136 19:30:54 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:29.136 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:29.136 fio-3.35 00:16:29.136 Starting 1 process 00:16:34.477 19:30:59 -- 
ublk/ublk_recovery.sh@36 -- # kill -9 76590 00:16:34.477 19:30:59 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:39.769 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76590 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:39.769 19:31:04 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:39.769 19:31:04 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76742 00:16:39.769 19:31:04 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:39.769 19:31:04 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76742 00:16:39.769 19:31:04 -- common/autotest_common.sh@817 -- # '[' -z 76742 ']' 00:16:39.769 19:31:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.769 19:31:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:39.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.769 19:31:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.769 19:31:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:39.769 19:31:04 -- common/autotest_common.sh@10 -- # set +x 00:16:39.769 [2024-04-24 19:31:04.518743] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:16:39.769 [2024-04-24 19:31:04.518930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76742 ] 00:16:39.769 [2024-04-24 19:31:04.732978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:39.769 [2024-04-24 19:31:05.015502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:39.769 [2024-04-24 19:31:05.015534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.717 19:31:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:40.717 19:31:06 -- common/autotest_common.sh@850 -- # return 0 00:16:40.718 19:31:06 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:40.718 19:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.718 19:31:06 -- common/autotest_common.sh@10 -- # set +x 00:16:40.718 [2024-04-24 19:31:06.142172] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:40.718 19:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.718 19:31:06 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:40.718 19:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.718 19:31:06 -- common/autotest_common.sh@10 -- # set +x 00:16:40.718 malloc0 00:16:40.718 19:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.718 19:31:06 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:40.718 19:31:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.718 19:31:06 -- common/autotest_common.sh@10 -- # set +x 00:16:40.718 [2024-04-24 19:31:06.353865] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:40.718 [2024-04-24 19:31:06.353921] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:40.718 [2024-04-24 19:31:06.353931] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:40.718 [2024-04-24 19:31:06.361727] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_GET_DEV_INFO completed 00:16:40.718 [2024-04-24 19:31:06.361765] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:40.718 [2024-04-24 19:31:06.361865] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:40.718 1 00:16:40.718 19:31:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.718 19:31:06 -- ublk/ublk_recovery.sh@52 -- # wait 76631 00:16:40.718 [2024-04-24 19:31:06.369727] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:40.718 [2024-04-24 19:31:06.375432] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:40.718 [2024-04-24 19:31:06.380957] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:40.718 [2024-04-24 19:31:06.381003] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:36.977 00:17:36.977 fio_test: (groupid=0, jobs=1): err= 0: pid=76634: Wed Apr 24 19:31:54 2024 00:17:36.977 read: IOPS=18.4k, BW=71.9MiB/s (75.4MB/s)(4315MiB/60003msec) 00:17:36.977 slat (nsec): min=1207, max=1706.4k, avg=8231.79, stdev=4260.91 00:17:36.977 clat (usec): min=665, max=7002.3k, avg=3473.70, stdev=56502.65 00:17:36.977 lat (usec): min=683, max=7002.3k, avg=3481.93, stdev=56502.65 00:17:36.977 clat percentiles (usec): 00:17:36.977 | 1.00th=[ 2180], 5.00th=[ 2409], 10.00th=[ 2507], 20.00th=[ 2638], 00:17:36.977 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2835], 00:17:36.977 | 70.00th=[ 2966], 80.00th=[ 3326], 90.00th=[ 3621], 95.00th=[ 4359], 00:17:36.977 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 9241], 99.95th=[13173], 00:17:36.977 | 99.99th=[17957] 00:17:36.977 bw ( KiB/s): min=18720, max=103744, per=100.00%, avg=82763.27, stdev=12722.79, samples=106 00:17:36.977 iops : min= 4680, max=25936, avg=20690.78, stdev=3180.71, samples=106 00:17:36.977 write: IOPS=18.4k, BW=71.9MiB/s (75.4MB/s)(4313MiB/60003msec); 0 zone resets 00:17:36.977 slat (nsec): min=1228, max=1282.3k, avg=8347.09, stdev=4038.73 00:17:36.978 clat (usec): min=660, max=7002.6k, avg=3462.18, stdev=49840.76 00:17:36.978 lat (usec): min=665, max=7002.6k, avg=3470.52, stdev=49840.77 00:17:36.978 clat percentiles (usec): 00:17:36.978 | 1.00th=[ 2212], 5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2704], 00:17:36.978 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2966], 00:17:36.978 | 70.00th=[ 3064], 80.00th=[ 3425], 90.00th=[ 3720], 95.00th=[ 4359], 00:17:36.978 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 9110], 99.95th=[13304], 00:17:36.978 | 99.99th=[17957] 00:17:36.978 bw ( KiB/s): min=18328, max=103168, per=100.00%, avg=82730.54, stdev=12582.56, samples=106 00:17:36.978 iops : min= 4582, max=25792, avg=20682.59, stdev=3145.63, samples=106 00:17:36.978 lat (usec) : 750=0.01%, 1000=0.01% 00:17:36.978 lat (msec) : 2=0.29%, 4=92.79%, 10=6.83%, 20=0.08%, >=2000=0.01% 00:17:36.978 cpu : usr=9.01%, sys=30.53%, ctx=89972, majf=0, minf=13 00:17:36.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:36.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:36.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:36.978 issued rwts: total=1104548,1104163,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:36.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:36.978 00:17:36.978 Run status group 0 (all jobs): 
00:17:36.978 READ: bw=71.9MiB/s (75.4MB/s), 71.9MiB/s-71.9MiB/s (75.4MB/s-75.4MB/s), io=4315MiB (4524MB), run=60003-60003msec 00:17:36.978 WRITE: bw=71.9MiB/s (75.4MB/s), 71.9MiB/s-71.9MiB/s (75.4MB/s-75.4MB/s), io=4313MiB (4523MB), run=60003-60003msec 00:17:36.978 00:17:36.978 Disk stats (read/write): 00:17:36.978 ublkb1: ios=1102579/1102169, merge=0/0, ticks=3723645/3587180, in_queue=7310825, util=99.93% 00:17:36.978 19:31:54 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:36.978 19:31:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.978 19:31:54 -- common/autotest_common.sh@10 -- # set +x 00:17:36.978 [2024-04-24 19:31:54.650818] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:36.978 [2024-04-24 19:31:54.690928] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:36.978 [2024-04-24 19:31:54.691244] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:36.978 [2024-04-24 19:31:54.697712] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:36.978 [2024-04-24 19:31:54.697901] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:36.978 [2024-04-24 19:31:54.697920] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:36.978 19:31:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.978 19:31:54 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:36.978 19:31:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:36.978 19:31:54 -- common/autotest_common.sh@10 -- # set +x 00:17:36.978 [2024-04-24 19:31:54.711832] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:17:36.978 [2024-04-24 19:31:54.721035] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:17:36.978 [2024-04-24 19:31:54.721098] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:36.978 19:31:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:36.978 19:31:54 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:36.978 19:31:54 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:36.978 19:31:54 -- ublk/ublk_recovery.sh@14 -- # killprocess 76742 00:17:36.978 19:31:54 -- common/autotest_common.sh@936 -- # '[' -z 76742 ']' 00:17:36.978 19:31:54 -- common/autotest_common.sh@940 -- # kill -0 76742 00:17:36.978 19:31:54 -- common/autotest_common.sh@941 -- # uname 00:17:36.978 19:31:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:36.978 19:31:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76742 00:17:36.978 19:31:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:36.978 19:31:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:36.978 19:31:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76742' 00:17:36.978 killing process with pid 76742 00:17:36.978 19:31:54 -- common/autotest_common.sh@955 -- # kill 76742 00:17:36.978 19:31:54 -- common/autotest_common.sh@960 -- # wait 76742 00:17:36.978 [2024-04-24 19:31:56.095730] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:17:36.978 [2024-04-24 19:31:56.095835] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:17:36.978 00:17:36.978 real 1m6.465s 00:17:36.978 user 1m49.564s 00:17:36.978 sys 0m35.662s 00:17:36.978 19:31:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:36.978 19:31:57 -- common/autotest_common.sh@10 -- # set +x 00:17:36.978 ************************************ 00:17:36.978 END TEST 
ublk_recovery 00:17:36.978 ************************************ 00:17:36.978 19:31:57 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:17:36.978 19:31:57 -- spdk/autotest.sh@258 -- # timing_exit lib 00:17:36.978 19:31:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:36.978 19:31:57 -- common/autotest_common.sh@10 -- # set +x 00:17:36.978 19:31:57 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:17:36.978 19:31:57 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:17:36.978 19:31:57 -- spdk/autotest.sh@277 -- # '[' 0 -eq 1 ']' 00:17:36.978 19:31:57 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:17:36.978 19:31:57 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:17:36.978 19:31:57 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:17:36.978 19:31:57 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:36.978 19:31:57 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:17:36.978 19:31:57 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:36.978 19:31:57 -- spdk/autotest.sh@337 -- # '[' 1 -eq 1 ']' 00:17:36.978 19:31:57 -- spdk/autotest.sh@338 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:36.978 19:31:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:36.978 19:31:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:36.978 19:31:57 -- common/autotest_common.sh@10 -- # set +x 00:17:36.978 ************************************ 00:17:36.978 START TEST ftl 00:17:36.978 ************************************ 00:17:36.978 19:31:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:36.978 * Looking for test storage... 00:17:36.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:36.978 19:31:58 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:36.978 19:31:58 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:36.978 19:31:58 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:36.978 19:31:58 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:36.978 19:31:58 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
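With the ublk suites done, autotest dispatches into the ftl suite. Its first real task, visible a little further below, is device selection: ftl.sh queries bdev_get_bdevs once for a cache device (a non-zoned namespace exposing 64-byte metadata and at least 1310720 blocks, 0000:00:10.0 in this run) and once for a base device (any other sufficiently large non-zoned namespace, 0000:00:11.0 here). The same selection, parameterized instead of hard-coding the cache address, is sketched by:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nv_cache=$($rpc bdev_get_bdevs | jq -r \
    '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' \
    | head -n1)
  base=$($rpc bdev_get_bdevs | jq -r \
    ".[] | select(.driver_specific.nvme[0].pci_address!=\"$nv_cache\" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address" \
    | head -n1)
  echo "cache=$nv_cache base=$base"                    # cache=0000:00:10.0 base=0000:00:11.0 in this run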
00:17:36.978 19:31:58 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:36.978 19:31:58 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.978 19:31:58 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:36.978 19:31:58 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:36.978 19:31:58 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.978 19:31:58 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.978 19:31:58 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:36.978 19:31:58 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:36.978 19:31:58 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:36.978 19:31:58 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:36.978 19:31:58 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:36.978 19:31:58 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:36.978 19:31:58 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.978 19:31:58 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:36.978 19:31:58 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:36.978 19:31:58 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:36.978 19:31:58 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:36.978 19:31:58 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:36.978 19:31:58 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:36.978 19:31:58 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:36.978 19:31:58 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:36.978 19:31:58 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:36.978 19:31:58 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:36.978 19:31:58 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:36.978 19:31:58 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:36.978 19:31:58 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:36.978 19:31:58 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:36.978 19:31:58 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:36.978 19:31:58 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:36.978 19:31:58 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:36.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:36.979 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:36.979 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:36.979 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:36.979 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:36.979 19:31:58 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:36.979 19:31:58 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77539 00:17:36.979 19:31:58 -- ftl/ftl.sh@38 -- # waitforlisten 77539 00:17:36.979 19:31:58 -- common/autotest_common.sh@817 -- # '[' -z 77539 ']' 00:17:36.979 19:31:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.979 19:31:58 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:17:36.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.979 19:31:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.979 19:31:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:36.979 19:31:58 -- common/autotest_common.sh@10 -- # set +x 00:17:36.979 [2024-04-24 19:31:58.994487] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:17:36.979 [2024-04-24 19:31:58.994662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77539 ] 00:17:36.979 [2024-04-24 19:31:59.174376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.979 [2024-04-24 19:31:59.449775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.979 19:31:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:36.979 19:31:59 -- common/autotest_common.sh@850 -- # return 0 00:17:36.979 19:31:59 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:36.979 19:32:00 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:36.979 19:32:01 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:36.979 19:32:01 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:36.979 19:32:01 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:36.979 19:32:01 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:36.979 19:32:01 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:36.979 19:32:02 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:36.979 19:32:02 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:36.979 19:32:02 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:36.979 19:32:02 -- ftl/ftl.sh@50 -- # break 00:17:36.979 19:32:02 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:36.979 19:32:02 -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:36.979 19:32:02 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:36.979 19:32:02 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:36.979 19:32:02 -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:36.979 19:32:02 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:36.979 19:32:02 -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:36.979 19:32:02 -- ftl/ftl.sh@63 -- # break 00:17:36.979 19:32:02 -- ftl/ftl.sh@66 -- # killprocess 77539 00:17:36.979 19:32:02 -- common/autotest_common.sh@936 -- # '[' -z 77539 ']' 00:17:36.979 19:32:02 -- common/autotest_common.sh@940 -- # kill -0 77539 00:17:36.979 19:32:02 -- common/autotest_common.sh@941 -- # uname 00:17:36.979 19:32:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:36.979 19:32:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77539 00:17:36.979 19:32:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:36.979 19:32:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:36.979 killing process with pid 77539 00:17:36.979 19:32:02 
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 77539' 00:17:36.979 19:32:02 -- common/autotest_common.sh@955 -- # kill 77539 00:17:36.979 19:32:02 -- common/autotest_common.sh@960 -- # wait 77539 00:17:39.512 19:32:05 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:39.513 19:32:05 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:17:39.513 19:32:05 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:39.513 19:32:05 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:39.513 19:32:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:39.513 19:32:05 -- common/autotest_common.sh@10 -- # set +x 00:17:39.513 ************************************ 00:17:39.513 START TEST ftl_fio_basic 00:17:39.513 ************************************ 00:17:39.513 19:32:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:39.772 * Looking for test storage... 00:17:39.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:39.772 19:32:05 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:39.772 19:32:05 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:39.772 19:32:05 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:39.772 19:32:05 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:39.772 19:32:05 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:39.772 19:32:05 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:39.772 19:32:05 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.772 19:32:05 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:39.772 19:32:05 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:39.772 19:32:05 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:39.772 19:32:05 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:39.772 19:32:05 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:39.772 19:32:05 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:39.772 19:32:05 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:39.772 19:32:05 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:39.772 19:32:05 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:39.772 19:32:05 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:39.772 19:32:05 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:39.773 19:32:05 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:39.773 19:32:05 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:39.773 19:32:05 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:39.773 19:32:05 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:39.773 19:32:05 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:39.773 19:32:05 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:39.773 19:32:05 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:39.773 19:32:05 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:39.773 19:32:05 -- ftl/common.sh@23 -- # 
spdk_ini_pid= 00:17:39.773 19:32:05 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:39.773 19:32:05 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:39.773 19:32:05 -- ftl/fio.sh@11 -- # declare -A suite 00:17:39.773 19:32:05 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:39.773 19:32:05 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:39.773 19:32:05 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:39.773 19:32:05 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.773 19:32:05 -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:39.773 19:32:05 -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:39.773 19:32:05 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:39.773 19:32:05 -- ftl/fio.sh@26 -- # uuid= 00:17:39.773 19:32:05 -- ftl/fio.sh@27 -- # timeout=240 00:17:39.773 19:32:05 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:39.773 19:32:05 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:39.773 19:32:05 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:39.773 19:32:05 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:39.773 19:32:05 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:39.773 19:32:05 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:39.773 19:32:05 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:39.773 19:32:05 -- ftl/fio.sh@45 -- # svcpid=77690 00:17:39.773 19:32:05 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:39.773 19:32:05 -- ftl/fio.sh@46 -- # waitforlisten 77690 00:17:39.773 19:32:05 -- common/autotest_common.sh@817 -- # '[' -z 77690 ']' 00:17:39.773 19:32:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.773 19:32:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:39.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.773 19:32:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.773 19:32:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:39.773 19:32:05 -- common/autotest_common.sh@10 -- # set +x 00:17:39.773 [2024-04-24 19:32:05.349713] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
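The waitforlisten 77690 call above blocks until the freshly started spdk_tgt (pid 77690, core mask 7) answers on the UNIX socket /var/tmp/spdk.sock; the DPDK EAL parameter dump for that pid continues below. A minimal sketch of the polling pattern, assuming only rpc.py and the default socket path — waitforlisten_sketch is illustrative, not the verbatim autotest_common.sh helper, though the retry count mirrors the max_retries=100 logged above:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        # Bail out early if the target process died before opening its socket.
        kill -0 "$pid" 2>/dev/null || return 1
        # rpc_get_methods succeeds once the RPC server accepts connections.
        "$rpc_py" -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.5
    done
    return 1
}
waitforlisten_sketch 77690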
00:17:39.773 [2024-04-24 19:32:05.350248] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77690 ] 00:17:40.032 [2024-04-24 19:32:05.519379] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.291 [2024-04-24 19:32:05.797926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.291 [2024-04-24 19:32:05.798074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.291 [2024-04-24 19:32:05.798113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.227 19:32:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:41.227 19:32:06 -- common/autotest_common.sh@850 -- # return 0 00:17:41.227 19:32:06 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:41.227 19:32:06 -- ftl/common.sh@54 -- # local name=nvme0 00:17:41.227 19:32:06 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:41.227 19:32:06 -- ftl/common.sh@56 -- # local size=103424 00:17:41.227 19:32:06 -- ftl/common.sh@59 -- # local base_bdev 00:17:41.227 19:32:06 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:41.486 19:32:07 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:41.486 19:32:07 -- ftl/common.sh@62 -- # local base_size 00:17:41.486 19:32:07 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:41.486 19:32:07 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:17:41.486 19:32:07 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:41.486 19:32:07 -- common/autotest_common.sh@1366 -- # local bs 00:17:41.486 19:32:07 -- common/autotest_common.sh@1367 -- # local nb 00:17:41.486 19:32:07 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:41.745 19:32:07 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:41.745 { 00:17:41.745 "name": "nvme0n1", 00:17:41.745 "aliases": [ 00:17:41.745 "f1f163a3-5aa5-417a-b95a-2fd7b352a71d" 00:17:41.745 ], 00:17:41.745 "product_name": "NVMe disk", 00:17:41.745 "block_size": 4096, 00:17:41.745 "num_blocks": 1310720, 00:17:41.745 "uuid": "f1f163a3-5aa5-417a-b95a-2fd7b352a71d", 00:17:41.745 "assigned_rate_limits": { 00:17:41.745 "rw_ios_per_sec": 0, 00:17:41.745 "rw_mbytes_per_sec": 0, 00:17:41.745 "r_mbytes_per_sec": 0, 00:17:41.745 "w_mbytes_per_sec": 0 00:17:41.745 }, 00:17:41.745 "claimed": false, 00:17:41.745 "zoned": false, 00:17:41.745 "supported_io_types": { 00:17:41.745 "read": true, 00:17:41.745 "write": true, 00:17:41.745 "unmap": true, 00:17:41.745 "write_zeroes": true, 00:17:41.745 "flush": true, 00:17:41.745 "reset": true, 00:17:41.745 "compare": true, 00:17:41.745 "compare_and_write": false, 00:17:41.745 "abort": true, 00:17:41.745 "nvme_admin": true, 00:17:41.745 "nvme_io": true 00:17:41.745 }, 00:17:41.745 "driver_specific": { 00:17:41.745 "nvme": [ 00:17:41.745 { 00:17:41.745 "pci_address": "0000:00:11.0", 00:17:41.745 "trid": { 00:17:41.745 "trtype": "PCIe", 00:17:41.745 "traddr": "0000:00:11.0" 00:17:41.745 }, 00:17:41.745 "ctrlr_data": { 00:17:41.745 "cntlid": 0, 00:17:41.745 "vendor_id": "0x1b36", 00:17:41.745 "model_number": "QEMU NVMe Ctrl", 00:17:41.745 "serial_number": "12341", 00:17:41.745 "firmware_revision": "8.0.0", 00:17:41.745 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:41.745 "oacs": { 00:17:41.745 
"security": 0, 00:17:41.745 "format": 1, 00:17:41.745 "firmware": 0, 00:17:41.745 "ns_manage": 1 00:17:41.745 }, 00:17:41.745 "multi_ctrlr": false, 00:17:41.745 "ana_reporting": false 00:17:41.745 }, 00:17:41.745 "vs": { 00:17:41.745 "nvme_version": "1.4" 00:17:41.745 }, 00:17:41.745 "ns_data": { 00:17:41.745 "id": 1, 00:17:41.745 "can_share": false 00:17:41.745 } 00:17:41.745 } 00:17:41.745 ], 00:17:41.745 "mp_policy": "active_passive" 00:17:41.745 } 00:17:41.745 } 00:17:41.745 ]' 00:17:41.745 19:32:07 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:42.004 19:32:07 -- common/autotest_common.sh@1369 -- # bs=4096 00:17:42.004 19:32:07 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:42.004 19:32:07 -- common/autotest_common.sh@1370 -- # nb=1310720 00:17:42.004 19:32:07 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:17:42.004 19:32:07 -- common/autotest_common.sh@1374 -- # echo 5120 00:17:42.004 19:32:07 -- ftl/common.sh@63 -- # base_size=5120 00:17:42.004 19:32:07 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:42.004 19:32:07 -- ftl/common.sh@67 -- # clear_lvols 00:17:42.004 19:32:07 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:42.004 19:32:07 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:42.404 19:32:07 -- ftl/common.sh@28 -- # stores= 00:17:42.404 19:32:07 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:42.404 19:32:07 -- ftl/common.sh@68 -- # lvs=6bd73d18-1e55-4934-b11b-3e3b0ca2d939 00:17:42.404 19:32:07 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6bd73d18-1e55-4934-b11b-3e3b0ca2d939 00:17:42.663 19:32:08 -- ftl/fio.sh@48 -- # split_bdev=1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 00:17:42.663 19:32:08 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 00:17:42.663 19:32:08 -- ftl/common.sh@35 -- # local name=nvc0 00:17:42.663 19:32:08 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:42.663 19:32:08 -- ftl/common.sh@37 -- # local base_bdev=1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 00:17:42.663 19:32:08 -- ftl/common.sh@38 -- # local cache_size= 00:17:42.663 19:32:08 -- ftl/common.sh@41 -- # get_bdev_size 1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 00:17:42.663 19:32:08 -- common/autotest_common.sh@1364 -- # local bdev_name=1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 00:17:42.663 19:32:08 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:42.663 19:32:08 -- common/autotest_common.sh@1366 -- # local bs 00:17:42.663 19:32:08 -- common/autotest_common.sh@1367 -- # local nb 00:17:42.663 19:32:08 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 00:17:42.923 19:32:08 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:42.923 { 00:17:42.923 "name": "1e73d1ee-6585-493d-81af-3c5f1b4dd2f8", 00:17:42.923 "aliases": [ 00:17:42.923 "lvs/nvme0n1p0" 00:17:42.923 ], 00:17:42.923 "product_name": "Logical Volume", 00:17:42.923 "block_size": 4096, 00:17:42.923 "num_blocks": 26476544, 00:17:42.923 "uuid": "1e73d1ee-6585-493d-81af-3c5f1b4dd2f8", 00:17:42.923 "assigned_rate_limits": { 00:17:42.923 "rw_ios_per_sec": 0, 00:17:42.923 "rw_mbytes_per_sec": 0, 00:17:42.923 "r_mbytes_per_sec": 0, 00:17:42.923 "w_mbytes_per_sec": 0 00:17:42.923 }, 00:17:42.923 "claimed": false, 00:17:42.923 "zoned": false, 00:17:42.923 "supported_io_types": { 
00:17:42.923 "read": true, 00:17:42.923 "write": true, 00:17:42.923 "unmap": true, 00:17:42.923 "write_zeroes": true, 00:17:42.923 "flush": false, 00:17:42.923 "reset": true, 00:17:42.923 "compare": false, 00:17:42.923 "compare_and_write": false, 00:17:42.923 "abort": false, 00:17:42.923 "nvme_admin": false, 00:17:42.923 "nvme_io": false 00:17:42.923 }, 00:17:42.923 "driver_specific": { 00:17:42.923 "lvol": { 00:17:42.923 "lvol_store_uuid": "6bd73d18-1e55-4934-b11b-3e3b0ca2d939", 00:17:42.923 "base_bdev": "nvme0n1", 00:17:42.923 "thin_provision": true, 00:17:42.923 "snapshot": false, 00:17:42.923 "clone": false, 00:17:42.923 "esnap_clone": false 00:17:42.923 } 00:17:42.923 } 00:17:42.923 } 00:17:42.923 ]' 00:17:42.923 19:32:08 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:42.923 19:32:08 -- common/autotest_common.sh@1369 -- # bs=4096 00:17:42.923 19:32:08 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:42.923 19:32:08 -- common/autotest_common.sh@1370 -- # nb=26476544 00:17:42.923 19:32:08 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:17:42.923 19:32:08 -- common/autotest_common.sh@1374 -- # echo 103424 00:17:42.923 19:32:08 -- ftl/common.sh@41 -- # local base_size=5171 00:17:42.923 19:32:08 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:42.923 19:32:08 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:43.184 19:32:08 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:43.184 19:32:08 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:43.184 19:32:08 -- ftl/common.sh@48 -- # get_bdev_size 1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 00:17:43.184 19:32:08 -- common/autotest_common.sh@1364 -- # local bdev_name=1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 00:17:43.184 19:32:08 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:43.184 19:32:08 -- common/autotest_common.sh@1366 -- # local bs 00:17:43.184 19:32:08 -- common/autotest_common.sh@1367 -- # local nb 00:17:43.184 19:32:08 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 00:17:43.445 19:32:08 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:43.445 { 00:17:43.445 "name": "1e73d1ee-6585-493d-81af-3c5f1b4dd2f8", 00:17:43.445 "aliases": [ 00:17:43.445 "lvs/nvme0n1p0" 00:17:43.445 ], 00:17:43.445 "product_name": "Logical Volume", 00:17:43.445 "block_size": 4096, 00:17:43.445 "num_blocks": 26476544, 00:17:43.445 "uuid": "1e73d1ee-6585-493d-81af-3c5f1b4dd2f8", 00:17:43.445 "assigned_rate_limits": { 00:17:43.445 "rw_ios_per_sec": 0, 00:17:43.445 "rw_mbytes_per_sec": 0, 00:17:43.445 "r_mbytes_per_sec": 0, 00:17:43.445 "w_mbytes_per_sec": 0 00:17:43.445 }, 00:17:43.445 "claimed": false, 00:17:43.445 "zoned": false, 00:17:43.445 "supported_io_types": { 00:17:43.445 "read": true, 00:17:43.445 "write": true, 00:17:43.445 "unmap": true, 00:17:43.445 "write_zeroes": true, 00:17:43.445 "flush": false, 00:17:43.445 "reset": true, 00:17:43.445 "compare": false, 00:17:43.445 "compare_and_write": false, 00:17:43.445 "abort": false, 00:17:43.445 "nvme_admin": false, 00:17:43.445 "nvme_io": false 00:17:43.445 }, 00:17:43.445 "driver_specific": { 00:17:43.445 "lvol": { 00:17:43.445 "lvol_store_uuid": "6bd73d18-1e55-4934-b11b-3e3b0ca2d939", 00:17:43.445 "base_bdev": "nvme0n1", 00:17:43.446 "thin_provision": true, 00:17:43.446 "snapshot": false, 00:17:43.446 "clone": false, 00:17:43.446 "esnap_clone": false 00:17:43.446 } 00:17:43.446 } 00:17:43.446 
} 00:17:43.446 ]' 00:17:43.446 19:32:08 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:43.446 19:32:09 -- common/autotest_common.sh@1369 -- # bs=4096 00:17:43.446 19:32:09 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:43.446 19:32:09 -- common/autotest_common.sh@1370 -- # nb=26476544 00:17:43.446 19:32:09 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:17:43.446 19:32:09 -- common/autotest_common.sh@1374 -- # echo 103424 00:17:43.446 19:32:09 -- ftl/common.sh@48 -- # cache_size=5171 00:17:43.446 19:32:09 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:43.708 19:32:09 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:43.708 19:32:09 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:43.708 19:32:09 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:43.708 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:43.708 19:32:09 -- ftl/fio.sh@56 -- # get_bdev_size 1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 00:17:43.708 19:32:09 -- common/autotest_common.sh@1364 -- # local bdev_name=1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 00:17:43.708 19:32:09 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:43.708 19:32:09 -- common/autotest_common.sh@1366 -- # local bs 00:17:43.708 19:32:09 -- common/autotest_common.sh@1367 -- # local nb 00:17:43.708 19:32:09 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 00:17:43.973 19:32:09 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:43.973 { 00:17:43.973 "name": "1e73d1ee-6585-493d-81af-3c5f1b4dd2f8", 00:17:43.973 "aliases": [ 00:17:43.973 "lvs/nvme0n1p0" 00:17:43.973 ], 00:17:43.973 "product_name": "Logical Volume", 00:17:43.973 "block_size": 4096, 00:17:43.973 "num_blocks": 26476544, 00:17:43.973 "uuid": "1e73d1ee-6585-493d-81af-3c5f1b4dd2f8", 00:17:43.973 "assigned_rate_limits": { 00:17:43.973 "rw_ios_per_sec": 0, 00:17:43.973 "rw_mbytes_per_sec": 0, 00:17:43.973 "r_mbytes_per_sec": 0, 00:17:43.973 "w_mbytes_per_sec": 0 00:17:43.973 }, 00:17:43.973 "claimed": false, 00:17:43.973 "zoned": false, 00:17:43.973 "supported_io_types": { 00:17:43.973 "read": true, 00:17:43.973 "write": true, 00:17:43.973 "unmap": true, 00:17:43.973 "write_zeroes": true, 00:17:43.973 "flush": false, 00:17:43.973 "reset": true, 00:17:43.973 "compare": false, 00:17:43.973 "compare_and_write": false, 00:17:43.973 "abort": false, 00:17:43.973 "nvme_admin": false, 00:17:43.973 "nvme_io": false 00:17:43.973 }, 00:17:43.973 "driver_specific": { 00:17:43.973 "lvol": { 00:17:43.973 "lvol_store_uuid": "6bd73d18-1e55-4934-b11b-3e3b0ca2d939", 00:17:43.973 "base_bdev": "nvme0n1", 00:17:43.973 "thin_provision": true, 00:17:43.973 "snapshot": false, 00:17:43.973 "clone": false, 00:17:43.973 "esnap_clone": false 00:17:43.973 } 00:17:43.973 } 00:17:43.973 } 00:17:43.973 ]' 00:17:43.973 19:32:09 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:43.973 19:32:09 -- common/autotest_common.sh@1369 -- # bs=4096 00:17:43.973 19:32:09 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:43.973 19:32:09 -- common/autotest_common.sh@1370 -- # nb=26476544 00:17:43.973 19:32:09 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:17:43.973 19:32:09 -- common/autotest_common.sh@1374 -- # echo 103424 00:17:43.973 19:32:09 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:43.973 19:32:09 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:43.973 19:32:09 -- 
ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1e73d1ee-6585-493d-81af-3c5f1b4dd2f8 -c nvc0n1p0 --l2p_dram_limit 60 00:17:44.241 [2024-04-24 19:32:09.810936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.241 [2024-04-24 19:32:09.811011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:44.241 [2024-04-24 19:32:09.811029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:44.241 [2024-04-24 19:32:09.811041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.241 [2024-04-24 19:32:09.811130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.241 [2024-04-24 19:32:09.811145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:44.241 [2024-04-24 19:32:09.811158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:44.241 [2024-04-24 19:32:09.811168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.241 [2024-04-24 19:32:09.811205] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:44.241 [2024-04-24 19:32:09.812584] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:44.241 [2024-04-24 19:32:09.812623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.241 [2024-04-24 19:32:09.812643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:44.241 [2024-04-24 19:32:09.812658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:17:44.241 [2024-04-24 19:32:09.812667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.241 [2024-04-24 19:32:09.812765] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 11dd4f3e-3fdc-4dd0-89a9-8390ef8740f8 00:17:44.241 [2024-04-24 19:32:09.814249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.242 [2024-04-24 19:32:09.814286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:44.242 [2024-04-24 19:32:09.814299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:44.242 [2024-04-24 19:32:09.814309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.242 [2024-04-24 19:32:09.822028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.242 [2024-04-24 19:32:09.822072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:44.242 [2024-04-24 19:32:09.822084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.641 ms 00:17:44.242 [2024-04-24 19:32:09.822094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.242 [2024-04-24 19:32:09.822217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.242 [2024-04-24 19:32:09.822245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:44.242 [2024-04-24 19:32:09.822255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:17:44.242 [2024-04-24 19:32:09.822266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.242 [2024-04-24 19:32:09.822358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.242 [2024-04-24 19:32:09.822381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register 
IO device 00:17:44.242 [2024-04-24 19:32:09.822391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:44.242 [2024-04-24 19:32:09.822401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.242 [2024-04-24 19:32:09.822440] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:44.242 [2024-04-24 19:32:09.829468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.242 [2024-04-24 19:32:09.829519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:44.242 [2024-04-24 19:32:09.829534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.045 ms 00:17:44.242 [2024-04-24 19:32:09.829543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.242 [2024-04-24 19:32:09.829611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.242 [2024-04-24 19:32:09.829626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:44.242 [2024-04-24 19:32:09.829647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:44.242 [2024-04-24 19:32:09.829674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.242 [2024-04-24 19:32:09.829758] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:44.242 [2024-04-24 19:32:09.829895] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:44.242 [2024-04-24 19:32:09.829917] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:44.242 [2024-04-24 19:32:09.829930] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:44.242 [2024-04-24 19:32:09.829946] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:44.242 [2024-04-24 19:32:09.829956] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:44.242 [2024-04-24 19:32:09.829970] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:44.242 [2024-04-24 19:32:09.829979] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:44.242 [2024-04-24 19:32:09.829989] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:44.242 [2024-04-24 19:32:09.829998] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:44.242 [2024-04-24 19:32:09.830010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.242 [2024-04-24 19:32:09.830019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:44.242 [2024-04-24 19:32:09.830030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:44.242 [2024-04-24 19:32:09.830038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.242 [2024-04-24 19:32:09.830118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.242 [2024-04-24 19:32:09.830134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:44.242 [2024-04-24 19:32:09.830144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:44.242 [2024-04-24 19:32:09.830155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.242 
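The parameters just printed make the layout dump that follows easy to sanity-check: 20971520 L2P entries at the reported 4-byte address size is exactly the 80.00 MiB l2p region, and 1024 P2L checkpoint pages at the 4096-byte block size is the 4.00 MiB of each p2l0..p2l3 region. The same arithmetic in shell, with the values copied from this run:

echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 -> "Region l2p ... blocks: 80.00 MiB"
echo $(( 1024 * 4096 / 1024 / 1024 ))    # 4  -> "Region p2l0 ... blocks: 4.00 MiB"

Note that 20971520 is also the num_blocks later reported for ftl0, i.e. one L2P entry per 4 KiB logical block.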
[2024-04-24 19:32:09.830256] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:44.242 [2024-04-24 19:32:09.830270] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:44.242 [2024-04-24 19:32:09.830286] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.242 [2024-04-24 19:32:09.830296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.242 [2024-04-24 19:32:09.830306] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:44.242 [2024-04-24 19:32:09.830314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:44.242 [2024-04-24 19:32:09.830324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:44.242 [2024-04-24 19:32:09.830332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:44.242 [2024-04-24 19:32:09.830342] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:44.242 [2024-04-24 19:32:09.830351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.242 [2024-04-24 19:32:09.830365] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:44.242 [2024-04-24 19:32:09.830373] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:44.242 [2024-04-24 19:32:09.830383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:44.242 [2024-04-24 19:32:09.830391] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:44.242 [2024-04-24 19:32:09.830402] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:44.242 [2024-04-24 19:32:09.830410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.242 [2024-04-24 19:32:09.830419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:44.242 [2024-04-24 19:32:09.830427] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:44.242 [2024-04-24 19:32:09.830450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.242 [2024-04-24 19:32:09.830460] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:44.242 [2024-04-24 19:32:09.830470] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:44.242 [2024-04-24 19:32:09.830478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:44.242 [2024-04-24 19:32:09.830487] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:44.242 [2024-04-24 19:32:09.830495] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:44.242 [2024-04-24 19:32:09.830504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:44.242 [2024-04-24 19:32:09.830511] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:44.242 [2024-04-24 19:32:09.830520] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:44.242 [2024-04-24 19:32:09.830527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:44.242 [2024-04-24 19:32:09.830536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:44.242 [2024-04-24 19:32:09.830543] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:44.242 [2024-04-24 19:32:09.830552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:44.242 [2024-04-24 19:32:09.830559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
p2l3 00:17:44.242 [2024-04-24 19:32:09.830568] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:44.242 [2024-04-24 19:32:09.830574] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:44.242 [2024-04-24 19:32:09.830585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:44.242 [2024-04-24 19:32:09.830593] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:44.242 [2024-04-24 19:32:09.830627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.242 [2024-04-24 19:32:09.830645] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:44.242 [2024-04-24 19:32:09.830656] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:44.242 [2024-04-24 19:32:09.830664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:44.242 [2024-04-24 19:32:09.830673] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:44.242 [2024-04-24 19:32:09.830682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:44.242 [2024-04-24 19:32:09.830693] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:44.242 [2024-04-24 19:32:09.830701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:44.242 [2024-04-24 19:32:09.830713] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:44.242 [2024-04-24 19:32:09.830721] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:44.242 [2024-04-24 19:32:09.830730] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:44.242 [2024-04-24 19:32:09.830738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:44.242 [2024-04-24 19:32:09.830747] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:44.242 [2024-04-24 19:32:09.830754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:44.242 [2024-04-24 19:32:09.830773] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:44.242 [2024-04-24 19:32:09.830783] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.242 [2024-04-24 19:32:09.830795] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:44.242 [2024-04-24 19:32:09.830803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:44.242 [2024-04-24 19:32:09.830813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:44.243 [2024-04-24 19:32:09.830821] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:44.243 [2024-04-24 19:32:09.830831] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:44.243 [2024-04-24 19:32:09.830839] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:44.243 [2024-04-24 19:32:09.830849] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 
blk_offs:0x5d20 blk_sz:0x400 00:17:44.243 [2024-04-24 19:32:09.830857] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:44.243 [2024-04-24 19:32:09.830867] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:44.243 [2024-04-24 19:32:09.830875] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:44.243 [2024-04-24 19:32:09.830885] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:44.243 [2024-04-24 19:32:09.830893] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:44.243 [2024-04-24 19:32:09.830905] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:44.243 [2024-04-24 19:32:09.830912] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:44.243 [2024-04-24 19:32:09.830925] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:44.243 [2024-04-24 19:32:09.830934] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:44.243 [2024-04-24 19:32:09.830944] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:44.243 [2024-04-24 19:32:09.830959] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:44.243 [2024-04-24 19:32:09.830969] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:44.243 [2024-04-24 19:32:09.830994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.243 [2024-04-24 19:32:09.831006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:44.243 [2024-04-24 19:32:09.831016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.769 ms 00:17:44.243 [2024-04-24 19:32:09.831027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.243 [2024-04-24 19:32:09.859470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.243 [2024-04-24 19:32:09.859530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:44.243 [2024-04-24 19:32:09.859546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.420 ms 00:17:44.243 [2024-04-24 19:32:09.859556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.243 [2024-04-24 19:32:09.859685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.243 [2024-04-24 19:32:09.859703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:44.243 [2024-04-24 19:32:09.859715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:44.243 [2024-04-24 19:32:09.859727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.512 [2024-04-24 19:32:09.922401] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:44.512 [2024-04-24 19:32:09.922458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:44.512 [2024-04-24 19:32:09.922473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.715 ms 00:17:44.512 [2024-04-24 19:32:09.922484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.512 [2024-04-24 19:32:09.922542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.512 [2024-04-24 19:32:09.922558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:44.512 [2024-04-24 19:32:09.922568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:44.512 [2024-04-24 19:32:09.922577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.512 [2024-04-24 19:32:09.923106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.512 [2024-04-24 19:32:09.923130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:44.512 [2024-04-24 19:32:09.923140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:17:44.512 [2024-04-24 19:32:09.923153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.512 [2024-04-24 19:32:09.923285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.512 [2024-04-24 19:32:09.923307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:44.512 [2024-04-24 19:32:09.923318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:44.512 [2024-04-24 19:32:09.923331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.512 [2024-04-24 19:32:09.961944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.512 [2024-04-24 19:32:09.962000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:44.512 [2024-04-24 19:32:09.962014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.652 ms 00:17:44.512 [2024-04-24 19:32:09.962024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.512 [2024-04-24 19:32:09.979060] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:44.512 [2024-04-24 19:32:09.996683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.512 [2024-04-24 19:32:09.996743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:44.512 [2024-04-24 19:32:09.996759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.549 ms 00:17:44.512 [2024-04-24 19:32:09.996768] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.512 [2024-04-24 19:32:10.077955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:44.512 [2024-04-24 19:32:10.078021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:44.512 [2024-04-24 19:32:10.078039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.274 ms 00:17:44.512 [2024-04-24 19:32:10.078049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:44.512 [2024-04-24 19:32:10.078102] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
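The scrub notice above is the first-startup path: the 4096.00 MiB data_nvc region from the NV cache layout dump is wiped before the write buffer is used, and the entries that follow show that single step taking about 4.4 of the roughly 5 seconds of total FTL startup. A rough implied write rate, assuming the full region is written exactly once:

# 4096 MiB scrubbed in 4399.577 ms (duration logged below) ~= 931 MiB/s
awk 'BEGIN { printf "%.0f MiB/s\n", 4096 / 4.399577 }'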
00:17:44.512 [2024-04-24 19:32:10.078113] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:49.828 [2024-04-24 19:32:14.469228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.828 [2024-04-24 19:32:14.469296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:49.828 [2024-04-24 19:32:14.469334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4399.577 ms 00:17:49.828 [2024-04-24 19:32:14.469344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.828 [2024-04-24 19:32:14.469575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.828 [2024-04-24 19:32:14.469593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:49.828 [2024-04-24 19:32:14.469609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:17:49.828 [2024-04-24 19:32:14.469617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.828 [2024-04-24 19:32:14.514792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.828 [2024-04-24 19:32:14.514856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:49.828 [2024-04-24 19:32:14.514872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.166 ms 00:17:49.828 [2024-04-24 19:32:14.514880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.828 [2024-04-24 19:32:14.557710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.828 [2024-04-24 19:32:14.557767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:49.828 [2024-04-24 19:32:14.557784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.832 ms 00:17:49.828 [2024-04-24 19:32:14.557792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.828 [2024-04-24 19:32:14.558233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.828 [2024-04-24 19:32:14.558254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:49.828 [2024-04-24 19:32:14.558266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:17:49.828 [2024-04-24 19:32:14.558275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.828 [2024-04-24 19:32:14.667203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.828 [2024-04-24 19:32:14.667268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:49.828 [2024-04-24 19:32:14.667286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.047 ms 00:17:49.828 [2024-04-24 19:32:14.667295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.828 [2024-04-24 19:32:14.712968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.828 [2024-04-24 19:32:14.713040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:49.828 [2024-04-24 19:32:14.713058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.681 ms 00:17:49.828 [2024-04-24 19:32:14.713067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.828 [2024-04-24 19:32:14.717208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.828 [2024-04-24 19:32:14.717242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
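Even after bdev_ftl_create returns its JSON handle (name and uuid below), fio.sh still runs waitforbdev ftl0 before generating the fio job file; the -t 2000 on the bdev_get_bdevs call below is that helper's 2000 ms default timeout (bdev_timeout=2000 in the trace). A minimal sketch of the same check, assuming rpc.py — waitforbdev_sketch is illustrative, not the verbatim helper:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
waitforbdev_sketch() {
    local bdev=$1 timeout_ms=${2:-2000}
    # bdev_get_bdevs -t waits up to timeout_ms for the named bdev to appear.
    "$rpc_py" bdev_get_bdevs -b "$bdev" -t "$timeout_ms" >/dev/null
}
waitforbdev_sketch ftl0 2000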
00:17:49.828 [2024-04-24 19:32:14.717256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.061 ms 00:17:49.828 [2024-04-24 19:32:14.717264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.828 [2024-04-24 19:32:14.762285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.828 [2024-04-24 19:32:14.762334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:49.828 [2024-04-24 19:32:14.762350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.015 ms 00:17:49.828 [2024-04-24 19:32:14.762359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.828 [2024-04-24 19:32:14.762438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.828 [2024-04-24 19:32:14.762450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:49.828 [2024-04-24 19:32:14.762461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:49.828 [2024-04-24 19:32:14.762469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.828 [2024-04-24 19:32:14.762628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.828 [2024-04-24 19:32:14.762662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:49.828 [2024-04-24 19:32:14.762674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:17:49.828 [2024-04-24 19:32:14.762682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.828 [2024-04-24 19:32:14.763896] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4962.013 ms, result 0 00:17:49.828 { 00:17:49.828 "name": "ftl0", 00:17:49.828 "uuid": "11dd4f3e-3fdc-4dd0-89a9-8390ef8740f8" 00:17:49.828 } 00:17:49.828 19:32:14 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:49.828 19:32:14 -- common/autotest_common.sh@885 -- # local bdev_name=ftl0 00:17:49.828 19:32:14 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:49.828 19:32:14 -- common/autotest_common.sh@887 -- # local i 00:17:49.828 19:32:14 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:49.828 19:32:14 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:49.828 19:32:14 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:49.828 19:32:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:49.828 [ 00:17:49.828 { 00:17:49.828 "name": "ftl0", 00:17:49.828 "aliases": [ 00:17:49.828 "11dd4f3e-3fdc-4dd0-89a9-8390ef8740f8" 00:17:49.828 ], 00:17:49.828 "product_name": "FTL disk", 00:17:49.828 "block_size": 4096, 00:17:49.828 "num_blocks": 20971520, 00:17:49.828 "uuid": "11dd4f3e-3fdc-4dd0-89a9-8390ef8740f8", 00:17:49.828 "assigned_rate_limits": { 00:17:49.828 "rw_ios_per_sec": 0, 00:17:49.828 "rw_mbytes_per_sec": 0, 00:17:49.828 "r_mbytes_per_sec": 0, 00:17:49.828 "w_mbytes_per_sec": 0 00:17:49.828 }, 00:17:49.828 "claimed": false, 00:17:49.828 "zoned": false, 00:17:49.828 "supported_io_types": { 00:17:49.828 "read": true, 00:17:49.828 "write": true, 00:17:49.828 "unmap": true, 00:17:49.828 "write_zeroes": true, 00:17:49.828 "flush": true, 00:17:49.828 "reset": false, 00:17:49.828 "compare": false, 00:17:49.828 "compare_and_write": false, 00:17:49.828 "abort": false, 00:17:49.828 "nvme_admin": false, 00:17:49.828 "nvme_io": false 00:17:49.828 }, 
00:17:49.828 "driver_specific": { 00:17:49.828 "ftl": { 00:17:49.828 "base_bdev": "1e73d1ee-6585-493d-81af-3c5f1b4dd2f8", 00:17:49.828 "cache": "nvc0n1p0" 00:17:49.828 } 00:17:49.828 } 00:17:49.828 } 00:17:49.828 ] 00:17:49.828 19:32:15 -- common/autotest_common.sh@893 -- # return 0 00:17:49.828 19:32:15 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:49.828 19:32:15 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:49.828 19:32:15 -- ftl/fio.sh@70 -- # echo ']}' 00:17:49.828 19:32:15 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:50.087 [2024-04-24 19:32:15.586996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.087 [2024-04-24 19:32:15.587066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:50.087 [2024-04-24 19:32:15.587082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:50.087 [2024-04-24 19:32:15.587095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.087 [2024-04-24 19:32:15.587135] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:50.087 [2024-04-24 19:32:15.591200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.087 [2024-04-24 19:32:15.591247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:50.087 [2024-04-24 19:32:15.591261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.050 ms 00:17:50.087 [2024-04-24 19:32:15.591270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.087 [2024-04-24 19:32:15.591868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.087 [2024-04-24 19:32:15.591896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:50.087 [2024-04-24 19:32:15.591909] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:17:50.087 [2024-04-24 19:32:15.591931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.087 [2024-04-24 19:32:15.595109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.087 [2024-04-24 19:32:15.595136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:50.087 [2024-04-24 19:32:15.595150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms 00:17:50.087 [2024-04-24 19:32:15.595159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.087 [2024-04-24 19:32:15.600890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.087 [2024-04-24 19:32:15.600925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:50.087 [2024-04-24 19:32:15.600940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.706 ms 00:17:50.087 [2024-04-24 19:32:15.600949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.087 [2024-04-24 19:32:15.647219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.087 [2024-04-24 19:32:15.647288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:50.087 [2024-04-24 19:32:15.647316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.228 ms 00:17:50.087 [2024-04-24 19:32:15.647325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.087 [2024-04-24 19:32:15.674447] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.087 [2024-04-24 19:32:15.674522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:50.087 [2024-04-24 19:32:15.674544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.059 ms 00:17:50.087 [2024-04-24 19:32:15.674553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.088 [2024-04-24 19:32:15.674861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.088 [2024-04-24 19:32:15.674908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:50.088 [2024-04-24 19:32:15.674938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:17:50.088 [2024-04-24 19:32:15.674947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.088 [2024-04-24 19:32:15.720013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.088 [2024-04-24 19:32:15.720084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:50.088 [2024-04-24 19:32:15.720107] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.090 ms 00:17:50.088 [2024-04-24 19:32:15.720116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.349 [2024-04-24 19:32:15.767780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.349 [2024-04-24 19:32:15.767847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:50.349 [2024-04-24 19:32:15.767865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.648 ms 00:17:50.349 [2024-04-24 19:32:15.767874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.349 [2024-04-24 19:32:15.815762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.349 [2024-04-24 19:32:15.815832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:50.349 [2024-04-24 19:32:15.815852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.875 ms 00:17:50.349 [2024-04-24 19:32:15.815862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.349 [2024-04-24 19:32:15.864830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.349 [2024-04-24 19:32:15.864890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:50.349 [2024-04-24 19:32:15.864925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.854 ms 00:17:50.349 [2024-04-24 19:32:15.864933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.349 [2024-04-24 19:32:15.865023] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:50.349 [2024-04-24 19:32:15.865041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865094] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 
19:32:15.865357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:17:50.349 [2024-04-24 19:32:15.865611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:50.349 [2024-04-24 19:32:15.865811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.865977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.866012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.866024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.866033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.866044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.866053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.866067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.866077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.866088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:50.350 [2024-04-24 19:32:15.866106] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:50.350 [2024-04-24 19:32:15.866118] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 11dd4f3e-3fdc-4dd0-89a9-8390ef8740f8 00:17:50.350 [2024-04-24 19:32:15.866127] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:50.350 [2024-04-24 19:32:15.866138] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:50.350 [2024-04-24 19:32:15.866146] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:50.350 [2024-04-24 19:32:15.866157] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:50.350 [2024-04-24 19:32:15.866166] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:50.350 [2024-04-24 19:32:15.866180] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:50.350 [2024-04-24 19:32:15.866188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:50.350 [2024-04-24 19:32:15.866197] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:50.350 [2024-04-24 19:32:15.866205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:50.350 [2024-04-24 19:32:15.866217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.350 [2024-04-24 19:32:15.866226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:50.350 [2024-04-24 19:32:15.866240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:17:50.350 [2024-04-24 19:32:15.866249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.350 [2024-04-24 19:32:15.889365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.350 [2024-04-24 19:32:15.889421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:50.350 [2024-04-24 19:32:15.889437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.067 ms 00:17:50.350 [2024-04-24 19:32:15.889449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.350 [2024-04-24 19:32:15.889815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.350 [2024-04-24 19:32:15.889832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:50.350 [2024-04-24 19:32:15.889844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:17:50.350 [2024-04-24 19:32:15.889853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.350 [2024-04-24 19:32:15.970238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.350 [2024-04-24 19:32:15.970296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:50.350 [2024-04-24 19:32:15.970315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.350 [2024-04-24 19:32:15.970340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.350 [2024-04-24 19:32:15.970422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.350 [2024-04-24 19:32:15.970432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:50.350 [2024-04-24 19:32:15.970445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.350 [2024-04-24 19:32:15.970454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.350 [2024-04-24 19:32:15.970586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.350 [2024-04-24 19:32:15.970599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:50.350 [2024-04-24 19:32:15.970611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.350 [2024-04-24 19:32:15.970622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.350 [2024-04-24 19:32:15.970661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.350 [2024-04-24 19:32:15.970671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
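A note on the "WAF: inf" line in the statistics dump above: the device reported total writes: 960 against user writes: 0. Write amplification factor is conventionally total media writes divided by user writes, so a zero denominator prints as inf. A minimal sketch of that arithmetic, assuming the conventional definition -- the exact expression inside ftl_debug.c is not shown in this log:

  # WAF = total media writes / user writes; zero user writes -> "inf",
  # matching the dump above (960 total writes, 0 user writes).
  waf() {
      local total=$1 user=$2
      if [ "$user" -eq 0 ]; then
          echo inf
      else
          awk -v t="$total" -v u="$user" 'BEGIN { printf "%.3f\n", t / u }'
      fi
  }
  waf 960 0    # -> inf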
00:17:50.350 [2024-04-24 19:32:15.970680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.350 [2024-04-24 19:32:15.970688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.610 [2024-04-24 19:32:16.129520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.610 [2024-04-24 19:32:16.129581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:50.610 [2024-04-24 19:32:16.129598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.610 [2024-04-24 19:32:16.129609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.610 [2024-04-24 19:32:16.182806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.610 [2024-04-24 19:32:16.182864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.610 [2024-04-24 19:32:16.182883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.610 [2024-04-24 19:32:16.182892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.610 [2024-04-24 19:32:16.183010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.610 [2024-04-24 19:32:16.183022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.610 [2024-04-24 19:32:16.183032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.610 [2024-04-24 19:32:16.183041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.610 [2024-04-24 19:32:16.183113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.610 [2024-04-24 19:32:16.183124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.610 [2024-04-24 19:32:16.183152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.610 [2024-04-24 19:32:16.183160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.610 [2024-04-24 19:32:16.183301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.610 [2024-04-24 19:32:16.183322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.610 [2024-04-24 19:32:16.183334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.610 [2024-04-24 19:32:16.183343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.610 [2024-04-24 19:32:16.183402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.610 [2024-04-24 19:32:16.183417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:50.610 [2024-04-24 19:32:16.183431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.610 [2024-04-24 19:32:16.183440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.610 [2024-04-24 19:32:16.183497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.610 [2024-04-24 19:32:16.183508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.610 [2024-04-24 19:32:16.183519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.610 [2024-04-24 19:32:16.183528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.610 [2024-04-24 19:32:16.183593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:50.610 [2024-04-24 19:32:16.183608] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.610 [2024-04-24 19:32:16.183619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:50.610 [2024-04-24 19:32:16.183645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.610 [2024-04-24 19:32:16.183825] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 597.946 ms, result 0 00:17:50.610 true 00:17:50.610 19:32:16 -- ftl/fio.sh@75 -- # killprocess 77690 00:17:50.610 19:32:16 -- common/autotest_common.sh@936 -- # '[' -z 77690 ']' 00:17:50.610 19:32:16 -- common/autotest_common.sh@940 -- # kill -0 77690 00:17:50.610 19:32:16 -- common/autotest_common.sh@941 -- # uname 00:17:50.610 19:32:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.610 19:32:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77690 00:17:50.610 killing process with pid 77690 00:17:50.610 19:32:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:50.610 19:32:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:50.610 19:32:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77690' 00:17:50.610 19:32:16 -- common/autotest_common.sh@955 -- # kill 77690 00:17:50.610 19:32:16 -- common/autotest_common.sh@960 -- # wait 77690 00:17:58.727 19:32:23 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:58.727 19:32:23 -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:58.727 19:32:23 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:58.727 19:32:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:58.727 19:32:23 -- common/autotest_common.sh@10 -- # set +x 00:17:58.727 19:32:23 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:58.727 19:32:23 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:58.727 19:32:23 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:58.727 19:32:23 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:58.727 19:32:23 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:58.728 19:32:23 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:58.728 19:32:23 -- common/autotest_common.sh@1327 -- # shift 00:17:58.728 19:32:23 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:58.728 19:32:23 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:58.728 19:32:23 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:58.728 19:32:23 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:58.728 19:32:23 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:58.728 19:32:23 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:58.728 19:32:23 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:58.728 19:32:23 -- common/autotest_common.sh@1333 -- # break 00:17:58.728 19:32:23 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:58.728 19:32:23 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:58.728 test: (g=0): rw=randwrite, bs=(R) 
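A sketch of the killprocess helper whose xtrace appears above (pid 77690, the fio app being torn down). This is a reconstruction from the traced steps, not the exact body of autotest_common.sh:

  # Approximate control flow of killprocess as traced above: refuse an
  # empty pid, confirm the process is alive, look up its comm name on
  # Linux (reactor_0 here), then kill it and wait for it to exit.
  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" = sudo ] && return 1   # the real helper special-cases sudo; simplified here
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }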
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:58.728 fio-3.35 00:17:58.728 Starting 1 thread 00:18:02.920 00:18:02.920 test: (groupid=0, jobs=1): err= 0: pid=77975: Wed Apr 24 19:32:28 2024 00:18:02.920 read: IOPS=1219, BW=81.0MiB/s (84.9MB/s)(255MiB/3142msec) 00:18:02.920 slat (nsec): min=4587, max=27049, avg=6965.49, stdev=2658.16 00:18:02.920 clat (usec): min=252, max=864, avg=359.09, stdev=48.26 00:18:02.920 lat (usec): min=262, max=871, avg=366.06, stdev=49.04 00:18:02.920 clat percentiles (usec): 00:18:02.920 | 1.00th=[ 269], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 330], 00:18:02.920 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 338], 60.00th=[ 347], 00:18:02.920 | 70.00th=[ 355], 80.00th=[ 400], 90.00th=[ 420], 95.00th=[ 461], 00:18:02.920 | 99.00th=[ 519], 99.50th=[ 537], 99.90th=[ 734], 99.95th=[ 857], 00:18:02.920 | 99.99th=[ 865] 00:18:02.920 write: IOPS=1228, BW=81.6MiB/s (85.5MB/s)(256MiB/3139msec); 0 zone resets 00:18:02.920 slat (nsec): min=15837, max=95608, avg=22773.49, stdev=6448.79 00:18:02.920 clat (usec): min=296, max=1915, avg=415.59, stdev=68.89 00:18:02.920 lat (usec): min=317, max=1936, avg=438.36, stdev=69.33 00:18:02.920 clat percentiles (usec): 00:18:02.920 | 1.00th=[ 343], 5.00th=[ 351], 10.00th=[ 355], 20.00th=[ 359], 00:18:02.920 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 416], 60.00th=[ 424], 00:18:02.920 | 70.00th=[ 433], 80.00th=[ 457], 90.00th=[ 498], 95.00th=[ 519], 00:18:02.920 | 99.00th=[ 644], 99.50th=[ 693], 99.90th=[ 775], 99.95th=[ 840], 00:18:02.920 | 99.99th=[ 1909] 00:18:02.920 bw ( KiB/s): min=73440, max=86904, per=99.97%, avg=83504.00, stdev=5087.93, samples=6 00:18:02.920 iops : min= 1080, max= 1278, avg=1228.00, stdev=74.82, samples=6 00:18:02.920 lat (usec) : 500=95.11%, 750=4.73%, 1000=0.14% 00:18:02.920 lat (msec) : 2=0.01% 00:18:02.920 cpu : usr=99.20%, sys=0.16%, ctx=7, majf=0, minf=1171 00:18:02.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:02.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.920 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.920 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:02.920 00:18:02.920 Run status group 0 (all jobs): 00:18:02.920 READ: bw=81.0MiB/s (84.9MB/s), 81.0MiB/s-81.0MiB/s (84.9MB/s-84.9MB/s), io=255MiB (267MB), run=3142-3142msec 00:18:02.920 WRITE: bw=81.6MiB/s (85.5MB/s), 81.6MiB/s-81.6MiB/s (85.5MB/s-85.5MB/s), io=256MiB (269MB), run=3139-3139msec 00:18:05.456 ----------------------------------------------------- 00:18:05.456 Suppressions used: 00:18:05.456 count bytes template 00:18:05.456 1 5 /usr/src/fio/parse.c 00:18:05.456 1 8 libtcmalloc_minimal.so 00:18:05.456 1 904 libcrypto.so 00:18:05.456 ----------------------------------------------------- 00:18:05.456 00:18:05.456 19:32:30 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:05.456 19:32:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:05.456 19:32:30 -- common/autotest_common.sh@10 -- # set +x 00:18:05.456 19:32:30 -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:05.456 19:32:30 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:05.456 19:32:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:05.456 19:32:30 -- common/autotest_common.sh@10 -- # set +x 00:18:05.456 19:32:30 -- ftl/fio.sh@80 -- # fio_bdev 
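Each fio_bdev invocation in this run (randw-verify above, randw-verify-j2 next) performs the same fio_plugin dance visible in the xtrace: find the sanitizer runtime the SPDK fio plugin links against and preload it ahead of the plugin, so fio, the plugin, and the SPDK code under test all share one ASAN runtime. A condensed sketch reconstructed from the trace; $fio_job stands in for the .fio config path:

  # Locate the ASAN runtime linked into the spdk_bdev fio plugin and
  # put it first in LD_PRELOAD, then run fio against the job file.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  for sanitizer in libasan libclang_rt.asan; do
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [ -n "$asan_lib" ] && break              # resolves to /usr/lib64/libasan.so.8 here
  done
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$fio_job"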
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:05.456 19:32:30 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:05.456 19:32:30 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:18:05.456 19:32:30 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:05.456 19:32:30 -- common/autotest_common.sh@1325 -- # local sanitizers 00:18:05.456 19:32:30 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:05.456 19:32:30 -- common/autotest_common.sh@1327 -- # shift 00:18:05.456 19:32:30 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:18:05.456 19:32:30 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:05.456 19:32:30 -- common/autotest_common.sh@1331 -- # grep libasan 00:18:05.456 19:32:30 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:05.456 19:32:30 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:05.456 19:32:30 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:05.456 19:32:30 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:05.456 19:32:30 -- common/autotest_common.sh@1333 -- # break 00:18:05.456 19:32:30 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:05.456 19:32:30 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:05.456 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:05.456 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:05.456 fio-3.35 00:18:05.456 Starting 2 threads 00:18:32.066 00:18:32.066 first_half: (groupid=0, jobs=1): err= 0: pid=78078: Wed Apr 24 19:32:56 2024 00:18:32.066 read: IOPS=2682, BW=10.5MiB/s (11.0MB/s)(256MiB/24411msec) 00:18:32.066 slat (nsec): min=4042, max=47175, avg=6953.84, stdev=1699.95 00:18:32.066 clat (usec): min=667, max=353660, avg=39505.59, stdev=28494.01 00:18:32.066 lat (usec): min=672, max=353665, avg=39512.54, stdev=28494.35 00:18:32.066 clat percentiles (msec): 00:18:32.066 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 32], 00:18:32.066 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:18:32.066 | 70.00th=[ 34], 80.00th=[ 39], 90.00th=[ 42], 95.00th=[ 87], 00:18:32.066 | 99.00th=[ 184], 99.50th=[ 207], 99.90th=[ 264], 99.95th=[ 313], 00:18:32.066 | 99.99th=[ 347] 00:18:32.066 write: IOPS=2687, BW=10.5MiB/s (11.0MB/s)(256MiB/24381msec); 0 zone resets 00:18:32.066 slat (usec): min=5, max=2163, avg= 8.46, stdev=10.04 00:18:32.066 clat (usec): min=362, max=63920, avg=8179.26, stdev=7727.15 00:18:32.066 lat (usec): min=370, max=63928, avg=8187.73, stdev=7727.34 00:18:32.066 clat percentiles (usec): 00:18:32.066 | 1.00th=[ 1090], 5.00th=[ 1467], 10.00th=[ 1844], 20.00th=[ 3163], 00:18:32.066 | 30.00th=[ 4359], 40.00th=[ 5604], 50.00th=[ 6456], 60.00th=[ 7242], 00:18:32.066 | 70.00th=[ 8291], 80.00th=[10945], 90.00th=[14746], 95.00th=[23462], 00:18:32.066 | 99.00th=[40633], 99.50th=[43254], 99.90th=[60031], 99.95th=[61604], 00:18:32.066 | 99.99th=[63177] 00:18:32.066 bw ( KiB/s): min= 40, max=48032, per=100.00%, 
avg=22640.30, stdev=14279.35, samples=23 00:18:32.066 iops : min= 10, max=12008, avg=5660.04, stdev=3569.88, samples=23 00:18:32.066 lat (usec) : 500=0.01%, 750=0.09%, 1000=0.23% 00:18:32.066 lat (msec) : 2=5.60%, 4=7.17%, 10=27.13%, 20=8.47%, 50=47.29% 00:18:32.066 lat (msec) : 100=1.83%, 250=2.12%, 500=0.05% 00:18:32.066 cpu : usr=99.23%, sys=0.17%, ctx=52, majf=0, minf=5546 00:18:32.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:32.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.066 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.066 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.066 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:32.066 second_half: (groupid=0, jobs=1): err= 0: pid=78079: Wed Apr 24 19:32:56 2024 00:18:32.066 read: IOPS=2703, BW=10.6MiB/s (11.1MB/s)(256MiB/24226msec) 00:18:32.066 slat (usec): min=4, max=156, avg= 6.98, stdev= 1.74 00:18:32.066 clat (msec): min=9, max=260, avg=40.06, stdev=26.70 00:18:32.066 lat (msec): min=9, max=260, avg=40.07, stdev=26.71 00:18:32.066 clat percentiles (msec): 00:18:32.066 | 1.00th=[ 30], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 32], 00:18:32.066 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 33], 00:18:32.066 | 70.00th=[ 35], 80.00th=[ 39], 90.00th=[ 45], 95.00th=[ 77], 00:18:32.066 | 99.00th=[ 182], 99.50th=[ 203], 99.90th=[ 243], 99.95th=[ 253], 00:18:32.066 | 99.99th=[ 259] 00:18:32.066 write: IOPS=2719, BW=10.6MiB/s (11.1MB/s)(256MiB/24102msec); 0 zone resets 00:18:32.066 slat (usec): min=4, max=2538, avg= 8.39, stdev=12.28 00:18:32.066 clat (usec): min=415, max=69151, avg=7256.85, stdev=4848.15 00:18:32.066 lat (usec): min=430, max=69158, avg=7265.25, stdev=4848.54 00:18:32.066 clat percentiles (usec): 00:18:32.066 | 1.00th=[ 1221], 5.00th=[ 2114], 10.00th=[ 2802], 20.00th=[ 3949], 00:18:32.066 | 30.00th=[ 5080], 40.00th=[ 5669], 50.00th=[ 6390], 60.00th=[ 6849], 00:18:32.066 | 70.00th=[ 7504], 80.00th=[ 9765], 90.00th=[13698], 95.00th=[14877], 00:18:32.066 | 99.00th=[21627], 99.50th=[27657], 99.90th=[63701], 99.95th=[66847], 00:18:32.066 | 99.99th=[68682] 00:18:32.066 bw ( KiB/s): min= 2640, max=47512, per=100.00%, avg=23652.32, stdev=13217.46, samples=22 00:18:32.066 iops : min= 660, max=11878, avg=5913.14, stdev=3304.50, samples=22 00:18:32.066 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.17% 00:18:32.066 lat (msec) : 2=1.85%, 4=8.15%, 10=29.96%, 20=9.18%, 50=46.48% 00:18:32.066 lat (msec) : 100=2.09%, 250=2.01%, 500=0.03% 00:18:32.066 cpu : usr=99.22%, sys=0.19%, ctx=151, majf=0, minf=5563 00:18:32.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:32.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.066 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:32.066 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.066 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:32.066 00:18:32.066 Run status group 0 (all jobs): 00:18:32.066 READ: bw=21.0MiB/s (22.0MB/s), 10.5MiB/s-10.6MiB/s (11.0MB/s-11.1MB/s), io=512MiB (536MB), run=24226-24411msec 00:18:32.066 WRITE: bw=21.0MiB/s (22.0MB/s), 10.5MiB/s-10.6MiB/s (11.0MB/s-11.1MB/s), io=512MiB (537MB), run=24102-24381msec 00:18:34.601 ----------------------------------------------------- 00:18:34.601 Suppressions used: 00:18:34.601 count bytes template 00:18:34.601 2 10 /usr/src/fio/parse.c 
00:18:34.601 4 384 /usr/src/fio/iolog.c 00:18:34.601 1 8 libtcmalloc_minimal.so 00:18:34.601 1 904 libcrypto.so 00:18:34.601 ----------------------------------------------------- 00:18:34.602 00:18:34.602 19:32:59 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:34.602 19:32:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:34.602 19:32:59 -- common/autotest_common.sh@10 -- # set +x 00:18:34.602 19:32:59 -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:34.602 19:32:59 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:34.602 19:32:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:34.602 19:32:59 -- common/autotest_common.sh@10 -- # set +x 00:18:34.602 19:32:59 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:34.602 19:32:59 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:34.602 19:32:59 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:18:34.602 19:32:59 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:34.602 19:32:59 -- common/autotest_common.sh@1325 -- # local sanitizers 00:18:34.602 19:32:59 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:34.602 19:32:59 -- common/autotest_common.sh@1327 -- # shift 00:18:34.602 19:32:59 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:18:34.602 19:32:59 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:18:34.602 19:32:59 -- common/autotest_common.sh@1331 -- # grep libasan 00:18:34.602 19:32:59 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:34.602 19:32:59 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:18:34.602 19:32:59 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:34.602 19:32:59 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:34.602 19:32:59 -- common/autotest_common.sh@1333 -- # break 00:18:34.602 19:32:59 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:34.602 19:32:59 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:34.602 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:34.602 fio-3.35 00:18:34.602 Starting 1 thread 00:18:52.716 00:18:52.716 test: (groupid=0, jobs=1): err= 0: pid=78409: Wed Apr 24 19:33:15 2024 00:18:52.716 read: IOPS=7763, BW=30.3MiB/s (31.8MB/s)(255MiB/8398msec) 00:18:52.716 slat (nsec): min=3962, max=39376, avg=5853.91, stdev=1231.83 00:18:52.716 clat (usec): min=734, max=31881, avg=16476.67, stdev=1124.18 00:18:52.716 lat (usec): min=738, max=31887, avg=16482.53, stdev=1124.20 00:18:52.716 clat percentiles (usec): 00:18:52.716 | 1.00th=[15533], 5.00th=[15795], 10.00th=[15926], 20.00th=[16057], 00:18:52.716 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16319], 60.00th=[16450], 00:18:52.716 | 70.00th=[16581], 80.00th=[16712], 90.00th=[16909], 95.00th=[17695], 00:18:52.716 | 99.00th=[20055], 99.50th=[23462], 99.90th=[29492], 99.95th=[30278], 00:18:52.716 | 99.99th=[31327] 00:18:52.716 write: IOPS=12.3k, BW=48.1MiB/s (50.5MB/s)(256MiB/5317msec); 0 zone resets 00:18:52.716 slat (usec): 
min=5, max=712, avg= 9.03, stdev= 7.26 00:18:52.716 clat (usec): min=641, max=60854, avg=10335.49, stdev=12605.64 00:18:52.716 lat (usec): min=651, max=60862, avg=10344.52, stdev=12605.63 00:18:52.716 clat percentiles (usec): 00:18:52.716 | 1.00th=[ 1012], 5.00th=[ 1237], 10.00th=[ 1401], 20.00th=[ 1598], 00:18:52.716 | 30.00th=[ 1778], 40.00th=[ 2278], 50.00th=[ 6783], 60.00th=[ 7963], 00:18:52.716 | 70.00th=[ 9241], 80.00th=[11207], 90.00th=[35914], 95.00th=[38011], 00:18:52.716 | 99.00th=[48497], 99.50th=[53740], 99.90th=[57934], 99.95th=[58983], 00:18:52.716 | 99.99th=[60031] 00:18:52.716 bw ( KiB/s): min=27344, max=60464, per=96.67%, avg=47662.55, stdev=10051.87, samples=11 00:18:52.716 iops : min= 6836, max=15116, avg=11915.64, stdev=2512.97, samples=11 00:18:52.716 lat (usec) : 750=0.01%, 1000=0.43% 00:18:52.716 lat (msec) : 2=18.29%, 4=2.35%, 10=16.44%, 20=53.93%, 50=8.15% 00:18:52.716 lat (msec) : 100=0.40% 00:18:52.716 cpu : usr=99.09%, sys=0.28%, ctx=30, majf=0, minf=5567 00:18:52.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:52.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:52.716 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:52.716 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:52.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:52.716 00:18:52.716 Run status group 0 (all jobs): 00:18:52.716 READ: bw=30.3MiB/s (31.8MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=255MiB (267MB), run=8398-8398msec 00:18:52.716 WRITE: bw=48.1MiB/s (50.5MB/s), 48.1MiB/s-48.1MiB/s (50.5MB/s-50.5MB/s), io=256MiB (268MB), run=5317-5317msec 00:18:52.716 ----------------------------------------------------- 00:18:52.716 Suppressions used: 00:18:52.716 count bytes template 00:18:52.716 1 5 /usr/src/fio/parse.c 00:18:52.716 2 192 /usr/src/fio/iolog.c 00:18:52.716 1 8 libtcmalloc_minimal.so 00:18:52.716 1 904 libcrypto.so 00:18:52.716 ----------------------------------------------------- 00:18:52.716 00:18:52.716 19:33:17 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:52.716 19:33:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:52.716 19:33:17 -- common/autotest_common.sh@10 -- # set +x 00:18:52.716 19:33:17 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:52.716 19:33:17 -- ftl/fio.sh@85 -- # remove_shm 00:18:52.716 Remove shared memory files 00:18:52.716 19:33:17 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:52.716 19:33:17 -- ftl/common.sh@205 -- # rm -f rm -f 00:18:52.716 19:33:17 -- ftl/common.sh@206 -- # rm -f rm -f 00:18:52.716 19:33:17 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid61709 /dev/shm/spdk_tgt_trace.pid76590 00:18:52.716 19:33:17 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:52.716 19:33:17 -- ftl/common.sh@209 -- # rm -f rm -f 00:18:52.716 00:18:52.716 real 1m12.718s 00:18:52.716 user 2m39.582s 00:18:52.716 sys 0m3.531s 00:18:52.716 19:33:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:52.716 19:33:17 -- common/autotest_common.sh@10 -- # set +x 00:18:52.716 ************************************ 00:18:52.716 END TEST ftl_fio_basic 00:18:52.716 ************************************ 00:18:52.716 19:33:17 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:52.716 19:33:17 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 
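A quick cross-check of the randw-verify-depth128 numbers above: fio's BW column is just IOPS times the 4096-byte block size, so the reported 30.3MiB/s (31.8MB/s) follows directly from IOPS=7763. Arithmetic only, not part of the captured output:

  echo $(( 7763 * 4096 ))                      # 31797248 bytes/s
  echo "scale=1; 7763 * 4096 / 1048576" | bc   # 30.3 MiB/s, as reported
  echo "scale=2; 7763 * 4096 / 1000000" | bc   # 31.79, rounded to 31.8 MB/s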
00:18:52.716 19:33:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:52.716 19:33:17 -- common/autotest_common.sh@10 -- # set +x 00:18:52.716 ************************************ 00:18:52.716 START TEST ftl_bdevperf 00:18:52.716 ************************************ 00:18:52.716 19:33:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:52.716 * Looking for test storage... 00:18:52.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:52.716 19:33:18 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:52.716 19:33:18 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:52.717 19:33:18 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:52.717 19:33:18 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:52.717 19:33:18 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:52.717 19:33:18 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:52.717 19:33:18 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:52.717 19:33:18 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:52.717 19:33:18 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:52.717 19:33:18 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:52.717 19:33:18 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:52.717 19:33:18 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:52.717 19:33:18 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:52.717 19:33:18 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:52.717 19:33:18 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:52.717 19:33:18 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:52.717 19:33:18 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:52.717 19:33:18 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:52.717 19:33:18 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:52.717 19:33:18 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:52.717 19:33:18 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:52.717 19:33:18 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:52.717 19:33:18 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:52.717 19:33:18 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:52.717 19:33:18 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:52.717 19:33:18 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:52.717 19:33:18 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:52.717 19:33:18 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:52.717 19:33:18 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:52.717 19:33:18 -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:52.717 19:33:18 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:52.717 19:33:18 -- ftl/bdevperf.sh@13 -- # use_append= 00:18:52.717 19:33:18 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:52.717 19:33:18 -- 
ftl/bdevperf.sh@15 -- # timeout=240 00:18:52.717 19:33:18 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:18:52.717 19:33:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:52.717 19:33:18 -- common/autotest_common.sh@10 -- # set +x 00:18:52.717 19:33:18 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=78652 00:18:52.717 19:33:18 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:52.717 19:33:18 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:52.717 19:33:18 -- ftl/bdevperf.sh@22 -- # waitforlisten 78652 00:18:52.717 19:33:18 -- common/autotest_common.sh@817 -- # '[' -z 78652 ']' 00:18:52.717 19:33:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.717 19:33:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:52.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.717 19:33:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.717 19:33:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:52.717 19:33:18 -- common/autotest_common.sh@10 -- # set +x 00:18:52.717 [2024-04-24 19:33:18.187169] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:18:52.717 [2024-04-24 19:33:18.187294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78652 ] 00:18:52.717 [2024-04-24 19:33:18.337969] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.976 [2024-04-24 19:33:18.599694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.545 19:33:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:53.545 19:33:19 -- common/autotest_common.sh@850 -- # return 0 00:18:53.545 19:33:19 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:53.545 19:33:19 -- ftl/common.sh@54 -- # local name=nvme0 00:18:53.545 19:33:19 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:53.545 19:33:19 -- ftl/common.sh@56 -- # local size=103424 00:18:53.545 19:33:19 -- ftl/common.sh@59 -- # local base_bdev 00:18:53.545 19:33:19 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:53.805 19:33:19 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:53.805 19:33:19 -- ftl/common.sh@62 -- # local base_size 00:18:53.805 19:33:19 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:53.805 19:33:19 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:18:53.805 19:33:19 -- common/autotest_common.sh@1365 -- # local bdev_info 00:18:53.805 19:33:19 -- common/autotest_common.sh@1366 -- # local bs 00:18:53.805 19:33:19 -- common/autotest_common.sh@1367 -- # local nb 00:18:53.805 19:33:19 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:54.064 19:33:19 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:18:54.064 { 00:18:54.064 "name": "nvme0n1", 00:18:54.064 "aliases": [ 00:18:54.064 "f0d6161e-af3b-4de7-ae64-364731c9715e" 00:18:54.064 ], 00:18:54.064 "product_name": "NVMe disk", 00:18:54.064 "block_size": 4096, 00:18:54.064 "num_blocks": 1310720, 00:18:54.064 
"uuid": "f0d6161e-af3b-4de7-ae64-364731c9715e", 00:18:54.064 "assigned_rate_limits": { 00:18:54.064 "rw_ios_per_sec": 0, 00:18:54.064 "rw_mbytes_per_sec": 0, 00:18:54.064 "r_mbytes_per_sec": 0, 00:18:54.064 "w_mbytes_per_sec": 0 00:18:54.064 }, 00:18:54.064 "claimed": true, 00:18:54.064 "claim_type": "read_many_write_one", 00:18:54.064 "zoned": false, 00:18:54.064 "supported_io_types": { 00:18:54.064 "read": true, 00:18:54.064 "write": true, 00:18:54.064 "unmap": true, 00:18:54.064 "write_zeroes": true, 00:18:54.064 "flush": true, 00:18:54.064 "reset": true, 00:18:54.064 "compare": true, 00:18:54.064 "compare_and_write": false, 00:18:54.064 "abort": true, 00:18:54.064 "nvme_admin": true, 00:18:54.064 "nvme_io": true 00:18:54.064 }, 00:18:54.064 "driver_specific": { 00:18:54.064 "nvme": [ 00:18:54.064 { 00:18:54.064 "pci_address": "0000:00:11.0", 00:18:54.064 "trid": { 00:18:54.064 "trtype": "PCIe", 00:18:54.064 "traddr": "0000:00:11.0" 00:18:54.064 }, 00:18:54.064 "ctrlr_data": { 00:18:54.064 "cntlid": 0, 00:18:54.064 "vendor_id": "0x1b36", 00:18:54.064 "model_number": "QEMU NVMe Ctrl", 00:18:54.064 "serial_number": "12341", 00:18:54.064 "firmware_revision": "8.0.0", 00:18:54.064 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:54.064 "oacs": { 00:18:54.064 "security": 0, 00:18:54.064 "format": 1, 00:18:54.064 "firmware": 0, 00:18:54.064 "ns_manage": 1 00:18:54.064 }, 00:18:54.064 "multi_ctrlr": false, 00:18:54.064 "ana_reporting": false 00:18:54.064 }, 00:18:54.064 "vs": { 00:18:54.064 "nvme_version": "1.4" 00:18:54.064 }, 00:18:54.064 "ns_data": { 00:18:54.064 "id": 1, 00:18:54.064 "can_share": false 00:18:54.064 } 00:18:54.064 } 00:18:54.064 ], 00:18:54.064 "mp_policy": "active_passive" 00:18:54.064 } 00:18:54.064 } 00:18:54.064 ]' 00:18:54.064 19:33:19 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:18:54.064 19:33:19 -- common/autotest_common.sh@1369 -- # bs=4096 00:18:54.064 19:33:19 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:18:54.064 19:33:19 -- common/autotest_common.sh@1370 -- # nb=1310720 00:18:54.064 19:33:19 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:18:54.064 19:33:19 -- common/autotest_common.sh@1374 -- # echo 5120 00:18:54.064 19:33:19 -- ftl/common.sh@63 -- # base_size=5120 00:18:54.064 19:33:19 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:54.064 19:33:19 -- ftl/common.sh@67 -- # clear_lvols 00:18:54.064 19:33:19 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:54.064 19:33:19 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:54.331 19:33:19 -- ftl/common.sh@28 -- # stores=6bd73d18-1e55-4934-b11b-3e3b0ca2d939 00:18:54.331 19:33:19 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:54.331 19:33:19 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6bd73d18-1e55-4934-b11b-3e3b0ca2d939 00:18:54.589 19:33:20 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:54.848 19:33:20 -- ftl/common.sh@68 -- # lvs=caf50340-a6c0-4f2c-87de-e4f64dbfc082 00:18:54.848 19:33:20 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u caf50340-a6c0-4f2c-87de-e4f64dbfc082 00:18:54.848 19:33:20 -- ftl/bdevperf.sh@23 -- # split_bdev=f08faeb6-0a66-4c94-8a9d-d4212a4450b5 00:18:54.848 19:33:20 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f08faeb6-0a66-4c94-8a9d-d4212a4450b5 00:18:54.848 19:33:20 -- ftl/common.sh@35 -- # 
local name=nvc0 00:18:54.848 19:33:20 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:54.848 19:33:20 -- ftl/common.sh@37 -- # local base_bdev=f08faeb6-0a66-4c94-8a9d-d4212a4450b5 00:18:54.848 19:33:20 -- ftl/common.sh@38 -- # local cache_size= 00:18:54.848 19:33:20 -- ftl/common.sh@41 -- # get_bdev_size f08faeb6-0a66-4c94-8a9d-d4212a4450b5 00:18:54.848 19:33:20 -- common/autotest_common.sh@1364 -- # local bdev_name=f08faeb6-0a66-4c94-8a9d-d4212a4450b5 00:18:54.848 19:33:20 -- common/autotest_common.sh@1365 -- # local bdev_info 00:18:54.848 19:33:20 -- common/autotest_common.sh@1366 -- # local bs 00:18:55.107 19:33:20 -- common/autotest_common.sh@1367 -- # local nb 00:18:55.107 19:33:20 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f08faeb6-0a66-4c94-8a9d-d4212a4450b5 00:18:55.365 19:33:20 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:18:55.365 { 00:18:55.365 "name": "f08faeb6-0a66-4c94-8a9d-d4212a4450b5", 00:18:55.365 "aliases": [ 00:18:55.365 "lvs/nvme0n1p0" 00:18:55.365 ], 00:18:55.365 "product_name": "Logical Volume", 00:18:55.365 "block_size": 4096, 00:18:55.365 "num_blocks": 26476544, 00:18:55.365 "uuid": "f08faeb6-0a66-4c94-8a9d-d4212a4450b5", 00:18:55.365 "assigned_rate_limits": { 00:18:55.365 "rw_ios_per_sec": 0, 00:18:55.365 "rw_mbytes_per_sec": 0, 00:18:55.365 "r_mbytes_per_sec": 0, 00:18:55.365 "w_mbytes_per_sec": 0 00:18:55.365 }, 00:18:55.365 "claimed": false, 00:18:55.365 "zoned": false, 00:18:55.365 "supported_io_types": { 00:18:55.365 "read": true, 00:18:55.365 "write": true, 00:18:55.365 "unmap": true, 00:18:55.365 "write_zeroes": true, 00:18:55.365 "flush": false, 00:18:55.365 "reset": true, 00:18:55.365 "compare": false, 00:18:55.365 "compare_and_write": false, 00:18:55.365 "abort": false, 00:18:55.365 "nvme_admin": false, 00:18:55.365 "nvme_io": false 00:18:55.365 }, 00:18:55.365 "driver_specific": { 00:18:55.365 "lvol": { 00:18:55.365 "lvol_store_uuid": "caf50340-a6c0-4f2c-87de-e4f64dbfc082", 00:18:55.365 "base_bdev": "nvme0n1", 00:18:55.365 "thin_provision": true, 00:18:55.365 "snapshot": false, 00:18:55.365 "clone": false, 00:18:55.365 "esnap_clone": false 00:18:55.365 } 00:18:55.365 } 00:18:55.365 } 00:18:55.365 ]' 00:18:55.365 19:33:20 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:18:55.365 19:33:20 -- common/autotest_common.sh@1369 -- # bs=4096 00:18:55.365 19:33:20 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:18:55.365 19:33:20 -- common/autotest_common.sh@1370 -- # nb=26476544 00:18:55.365 19:33:20 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:18:55.365 19:33:20 -- common/autotest_common.sh@1374 -- # echo 103424 00:18:55.365 19:33:20 -- ftl/common.sh@41 -- # local base_size=5171 00:18:55.365 19:33:20 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:55.365 19:33:20 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:55.623 19:33:21 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:55.623 19:33:21 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:55.623 19:33:21 -- ftl/common.sh@48 -- # get_bdev_size f08faeb6-0a66-4c94-8a9d-d4212a4450b5 00:18:55.623 19:33:21 -- common/autotest_common.sh@1364 -- # local bdev_name=f08faeb6-0a66-4c94-8a9d-d4212a4450b5 00:18:55.623 19:33:21 -- common/autotest_common.sh@1365 -- # local bdev_info 00:18:55.623 19:33:21 -- common/autotest_common.sh@1366 -- # local bs 00:18:55.623 19:33:21 -- common/autotest_common.sh@1367 -- # local nb 
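The get_bdev_size helper traced above (and again below for the same lvol) derives a size in MiB from bdev_get_bdevs JSON: block_size times num_blocks. A condensed reconstruction of the traced steps, using the values from the dump above; the real helper in autotest_common.sh may differ in detail:

  # Size in MiB = block_size * num_blocks / 2^20, read from the bdev JSON.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  get_bdev_size() {
      local bdev_name=$1 bdev_info bs nb
      bdev_info=$($rpc bdev_get_bdevs -b "$bdev_name")
      bs=$(jq '.[] .block_size' <<< "$bdev_info")    # 4096 for the lvol above
      nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 26476544
      echo $(( bs * nb / 1024 / 1024 ))              # 4096 * 26476544 / 2^20 = 103424
  }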
00:18:55.623 19:33:21 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f08faeb6-0a66-4c94-8a9d-d4212a4450b5 00:18:55.881 19:33:21 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:18:55.881 { 00:18:55.881 "name": "f08faeb6-0a66-4c94-8a9d-d4212a4450b5", 00:18:55.881 "aliases": [ 00:18:55.881 "lvs/nvme0n1p0" 00:18:55.881 ], 00:18:55.881 "product_name": "Logical Volume", 00:18:55.881 "block_size": 4096, 00:18:55.881 "num_blocks": 26476544, 00:18:55.881 "uuid": "f08faeb6-0a66-4c94-8a9d-d4212a4450b5", 00:18:55.881 "assigned_rate_limits": { 00:18:55.881 "rw_ios_per_sec": 0, 00:18:55.881 "rw_mbytes_per_sec": 0, 00:18:55.881 "r_mbytes_per_sec": 0, 00:18:55.881 "w_mbytes_per_sec": 0 00:18:55.881 }, 00:18:55.881 "claimed": false, 00:18:55.881 "zoned": false, 00:18:55.881 "supported_io_types": { 00:18:55.881 "read": true, 00:18:55.881 "write": true, 00:18:55.881 "unmap": true, 00:18:55.881 "write_zeroes": true, 00:18:55.881 "flush": false, 00:18:55.881 "reset": true, 00:18:55.881 "compare": false, 00:18:55.881 "compare_and_write": false, 00:18:55.881 "abort": false, 00:18:55.881 "nvme_admin": false, 00:18:55.881 "nvme_io": false 00:18:55.881 }, 00:18:55.881 "driver_specific": { 00:18:55.881 "lvol": { 00:18:55.881 "lvol_store_uuid": "caf50340-a6c0-4f2c-87de-e4f64dbfc082", 00:18:55.881 "base_bdev": "nvme0n1", 00:18:55.881 "thin_provision": true, 00:18:55.881 "snapshot": false, 00:18:55.881 "clone": false, 00:18:55.881 "esnap_clone": false 00:18:55.881 } 00:18:55.881 } 00:18:55.881 } 00:18:55.881 ]' 00:18:55.881 19:33:21 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:18:55.881 19:33:21 -- common/autotest_common.sh@1369 -- # bs=4096 00:18:55.881 19:33:21 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:18:55.881 19:33:21 -- common/autotest_common.sh@1370 -- # nb=26476544 00:18:55.881 19:33:21 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:18:55.881 19:33:21 -- common/autotest_common.sh@1374 -- # echo 103424 00:18:55.881 19:33:21 -- ftl/common.sh@48 -- # cache_size=5171 00:18:55.881 19:33:21 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:56.139 19:33:21 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:18:56.139 19:33:21 -- ftl/bdevperf.sh@26 -- # get_bdev_size f08faeb6-0a66-4c94-8a9d-d4212a4450b5 00:18:56.139 19:33:21 -- common/autotest_common.sh@1364 -- # local bdev_name=f08faeb6-0a66-4c94-8a9d-d4212a4450b5 00:18:56.139 19:33:21 -- common/autotest_common.sh@1365 -- # local bdev_info 00:18:56.139 19:33:21 -- common/autotest_common.sh@1366 -- # local bs 00:18:56.139 19:33:21 -- common/autotest_common.sh@1367 -- # local nb 00:18:56.139 19:33:21 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f08faeb6-0a66-4c94-8a9d-d4212a4450b5 00:18:56.397 19:33:21 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:18:56.397 { 00:18:56.397 "name": "f08faeb6-0a66-4c94-8a9d-d4212a4450b5", 00:18:56.397 "aliases": [ 00:18:56.397 "lvs/nvme0n1p0" 00:18:56.397 ], 00:18:56.397 "product_name": "Logical Volume", 00:18:56.397 "block_size": 4096, 00:18:56.397 "num_blocks": 26476544, 00:18:56.397 "uuid": "f08faeb6-0a66-4c94-8a9d-d4212a4450b5", 00:18:56.397 "assigned_rate_limits": { 00:18:56.397 "rw_ios_per_sec": 0, 00:18:56.397 "rw_mbytes_per_sec": 0, 00:18:56.397 "r_mbytes_per_sec": 0, 00:18:56.397 "w_mbytes_per_sec": 0 00:18:56.397 }, 00:18:56.397 "claimed": false, 00:18:56.397 "zoned": false, 00:18:56.397 
"supported_io_types": { 00:18:56.397 "read": true, 00:18:56.397 "write": true, 00:18:56.397 "unmap": true, 00:18:56.397 "write_zeroes": true, 00:18:56.397 "flush": false, 00:18:56.397 "reset": true, 00:18:56.397 "compare": false, 00:18:56.397 "compare_and_write": false, 00:18:56.397 "abort": false, 00:18:56.397 "nvme_admin": false, 00:18:56.397 "nvme_io": false 00:18:56.397 }, 00:18:56.397 "driver_specific": { 00:18:56.397 "lvol": { 00:18:56.397 "lvol_store_uuid": "caf50340-a6c0-4f2c-87de-e4f64dbfc082", 00:18:56.397 "base_bdev": "nvme0n1", 00:18:56.397 "thin_provision": true, 00:18:56.397 "snapshot": false, 00:18:56.397 "clone": false, 00:18:56.397 "esnap_clone": false 00:18:56.397 } 00:18:56.397 } 00:18:56.397 } 00:18:56.397 ]' 00:18:56.397 19:33:21 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:18:56.397 19:33:21 -- common/autotest_common.sh@1369 -- # bs=4096 00:18:56.397 19:33:21 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:18:56.397 19:33:21 -- common/autotest_common.sh@1370 -- # nb=26476544 00:18:56.397 19:33:21 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:18:56.397 19:33:21 -- common/autotest_common.sh@1374 -- # echo 103424 00:18:56.397 19:33:21 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:18:56.397 19:33:21 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f08faeb6-0a66-4c94-8a9d-d4212a4450b5 -c nvc0n1p0 --l2p_dram_limit 20 00:18:56.656 [2024-04-24 19:33:22.201083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.656 [2024-04-24 19:33:22.201145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:56.656 [2024-04-24 19:33:22.201161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:56.656 [2024-04-24 19:33:22.201172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.656 [2024-04-24 19:33:22.201243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.656 [2024-04-24 19:33:22.201255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:56.656 [2024-04-24 19:33:22.201264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:56.656 [2024-04-24 19:33:22.201275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.656 [2024-04-24 19:33:22.201301] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:56.656 [2024-04-24 19:33:22.202700] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:56.656 [2024-04-24 19:33:22.202736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.656 [2024-04-24 19:33:22.202754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:56.656 [2024-04-24 19:33:22.202764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.450 ms 00:18:56.656 [2024-04-24 19:33:22.202775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.656 [2024-04-24 19:33:22.202852] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 87bbb78a-af37-4943-b502-8b4be3f21a60 00:18:56.656 [2024-04-24 19:33:22.204360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.657 [2024-04-24 19:33:22.204393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:56.657 [2024-04-24 19:33:22.204407] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:56.657 [2024-04-24 19:33:22.204416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.657 [2024-04-24 19:33:22.212167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.657 [2024-04-24 19:33:22.212210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:56.657 [2024-04-24 19:33:22.212224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.712 ms 00:18:56.657 [2024-04-24 19:33:22.212232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.657 [2024-04-24 19:33:22.212342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.657 [2024-04-24 19:33:22.212382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:56.657 [2024-04-24 19:33:22.212394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:56.657 [2024-04-24 19:33:22.212403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.657 [2024-04-24 19:33:22.212483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.657 [2024-04-24 19:33:22.212499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:56.657 [2024-04-24 19:33:22.212514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:56.657 [2024-04-24 19:33:22.212522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.657 [2024-04-24 19:33:22.212550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:56.657 [2024-04-24 19:33:22.219432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.657 [2024-04-24 19:33:22.219485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:56.657 [2024-04-24 19:33:22.219498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.905 ms 00:18:56.657 [2024-04-24 19:33:22.219510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.657 [2024-04-24 19:33:22.219551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.657 [2024-04-24 19:33:22.219563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:56.657 [2024-04-24 19:33:22.219573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:56.657 [2024-04-24 19:33:22.219583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.657 [2024-04-24 19:33:22.219650] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:56.657 [2024-04-24 19:33:22.219784] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:56.657 [2024-04-24 19:33:22.219802] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:56.657 [2024-04-24 19:33:22.219821] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:56.657 [2024-04-24 19:33:22.219833] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:56.657 [2024-04-24 19:33:22.219844] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:56.657 [2024-04-24 19:33:22.219854] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P 
entries: 20971520 00:18:56.657 [2024-04-24 19:33:22.219865] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:56.657 [2024-04-24 19:33:22.219876] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:56.657 [2024-04-24 19:33:22.219886] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:56.657 [2024-04-24 19:33:22.219896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.657 [2024-04-24 19:33:22.219908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:56.657 [2024-04-24 19:33:22.219918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:18:56.657 [2024-04-24 19:33:22.219928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.657 [2024-04-24 19:33:22.219991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.657 [2024-04-24 19:33:22.220002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:56.657 [2024-04-24 19:33:22.220011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:56.657 [2024-04-24 19:33:22.220022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.657 [2024-04-24 19:33:22.220099] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:56.657 [2024-04-24 19:33:22.220117] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:56.657 [2024-04-24 19:33:22.220126] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:56.657 [2024-04-24 19:33:22.220137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.657 [2024-04-24 19:33:22.220147] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:56.657 [2024-04-24 19:33:22.220156] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:56.657 [2024-04-24 19:33:22.220164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:56.657 [2024-04-24 19:33:22.220173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:56.657 [2024-04-24 19:33:22.220202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:56.657 [2024-04-24 19:33:22.220214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:56.657 [2024-04-24 19:33:22.220222] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:56.657 [2024-04-24 19:33:22.220232] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:56.657 [2024-04-24 19:33:22.220239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:56.657 [2024-04-24 19:33:22.220250] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:56.657 [2024-04-24 19:33:22.220258] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:56.657 [2024-04-24 19:33:22.220267] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.657 [2024-04-24 19:33:22.220275] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:56.657 [2024-04-24 19:33:22.220287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:56.657 [2024-04-24 19:33:22.220294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.657 [2024-04-24 19:33:22.220304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:56.657 [2024-04-24 
19:33:22.220311] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:56.657 [2024-04-24 19:33:22.220321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:56.657 [2024-04-24 19:33:22.220329] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:56.657 [2024-04-24 19:33:22.220338] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:56.657 [2024-04-24 19:33:22.220345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:56.657 [2024-04-24 19:33:22.220355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:56.657 [2024-04-24 19:33:22.220362] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:56.657 [2024-04-24 19:33:22.220371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:56.657 [2024-04-24 19:33:22.220379] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:56.657 [2024-04-24 19:33:22.220388] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:56.657 [2024-04-24 19:33:22.220395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:56.657 [2024-04-24 19:33:22.220404] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:56.657 [2024-04-24 19:33:22.220412] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:56.657 [2024-04-24 19:33:22.220423] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:56.657 [2024-04-24 19:33:22.220434] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:56.657 [2024-04-24 19:33:22.220444] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:56.657 [2024-04-24 19:33:22.220451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:56.657 [2024-04-24 19:33:22.220460] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:56.657 [2024-04-24 19:33:22.220468] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:56.657 [2024-04-24 19:33:22.220478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:56.657 [2024-04-24 19:33:22.220486] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:56.657 [2024-04-24 19:33:22.220498] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:56.657 [2024-04-24 19:33:22.220506] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:56.657 [2024-04-24 19:33:22.220517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:56.657 [2024-04-24 19:33:22.220526] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:56.657 [2024-04-24 19:33:22.220536] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:56.657 [2024-04-24 19:33:22.220544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:56.657 [2024-04-24 19:33:22.220553] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:56.657 [2024-04-24 19:33:22.220561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:56.657 [2024-04-24 19:33:22.220573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:56.657 [2024-04-24 19:33:22.220582] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:56.657 [2024-04-24 19:33:22.220598] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:56.657 [2024-04-24 19:33:22.220607] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:56.657 [2024-04-24 19:33:22.220618] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:56.657 [2024-04-24 19:33:22.220626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:56.657 [2024-04-24 19:33:22.220646] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:56.657 [2024-04-24 19:33:22.220655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:56.657 [2024-04-24 19:33:22.220665] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:56.658 [2024-04-24 19:33:22.220674] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:56.658 [2024-04-24 19:33:22.220685] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:56.658 [2024-04-24 19:33:22.220693] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:56.658 [2024-04-24 19:33:22.220703] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:56.658 [2024-04-24 19:33:22.220711] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:56.658 [2024-04-24 19:33:22.220721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:56.658 [2024-04-24 19:33:22.220730] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:56.658 [2024-04-24 19:33:22.220744] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:56.658 [2024-04-24 19:33:22.220755] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:56.658 [2024-04-24 19:33:22.220766] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:56.658 [2024-04-24 19:33:22.220775] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:56.658 [2024-04-24 19:33:22.220785] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:56.658 [2024-04-24 19:33:22.220794] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:56.658 [2024-04-24 19:33:22.220805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.658 [2024-04-24 
19:33:22.220815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:56.658 [2024-04-24 19:33:22.220827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:18:56.658 [2024-04-24 19:33:22.220836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.658 [2024-04-24 19:33:22.249090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.658 [2024-04-24 19:33:22.249148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:56.658 [2024-04-24 19:33:22.249164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.256 ms 00:18:56.658 [2024-04-24 19:33:22.249172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.658 [2024-04-24 19:33:22.249282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.658 [2024-04-24 19:33:22.249297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:56.658 [2024-04-24 19:33:22.249310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:56.658 [2024-04-24 19:33:22.249321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.658 [2024-04-24 19:33:22.324161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.658 [2024-04-24 19:33:22.324229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:56.658 [2024-04-24 19:33:22.324247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.911 ms 00:18:56.658 [2024-04-24 19:33:22.324257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.658 [2024-04-24 19:33:22.324314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.658 [2024-04-24 19:33:22.324327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:56.658 [2024-04-24 19:33:22.324338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:56.658 [2024-04-24 19:33:22.324347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.658 [2024-04-24 19:33:22.324881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.658 [2024-04-24 19:33:22.324904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:56.658 [2024-04-24 19:33:22.324917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:18:56.658 [2024-04-24 19:33:22.324925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.658 [2024-04-24 19:33:22.325054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.658 [2024-04-24 19:33:22.325076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:56.658 [2024-04-24 19:33:22.325093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:18:56.658 [2024-04-24 19:33:22.325103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.917 [2024-04-24 19:33:22.351562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.918 [2024-04-24 19:33:22.351626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:56.918 [2024-04-24 19:33:22.351651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.483 ms 00:18:56.918 [2024-04-24 19:33:22.351661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.918 [2024-04-24 19:33:22.368798] ftl_l2p_cache.c: 
458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:56.918 [2024-04-24 19:33:22.375326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.918 [2024-04-24 19:33:22.375387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:56.918 [2024-04-24 19:33:22.375402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.584 ms 00:18:56.918 [2024-04-24 19:33:22.375430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.918 [2024-04-24 19:33:22.466970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:56.918 [2024-04-24 19:33:22.467050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:56.918 [2024-04-24 19:33:22.467066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 91.667 ms 00:18:56.918 [2024-04-24 19:33:22.467077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:56.918 [2024-04-24 19:33:22.467171] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:56.918 [2024-04-24 19:33:22.467191] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:00.223 [2024-04-24 19:33:25.609430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.223 [2024-04-24 19:33:25.609502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:00.223 [2024-04-24 19:33:25.609518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3148.312 ms 00:19:00.223 [2024-04-24 19:33:25.609532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.223 [2024-04-24 19:33:25.609779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.223 [2024-04-24 19:33:25.609798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:00.223 [2024-04-24 19:33:25.609807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:19:00.223 [2024-04-24 19:33:25.609818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.223 [2024-04-24 19:33:25.655193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.223 [2024-04-24 19:33:25.655257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:00.223 [2024-04-24 19:33:25.655275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.398 ms 00:19:00.223 [2024-04-24 19:33:25.655286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.223 [2024-04-24 19:33:25.699327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.223 [2024-04-24 19:33:25.699391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:00.223 [2024-04-24 19:33:25.699406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.043 ms 00:19:00.223 [2024-04-24 19:33:25.699437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.223 [2024-04-24 19:33:25.699919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.223 [2024-04-24 19:33:25.699955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:00.223 [2024-04-24 19:33:25.699969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:19:00.223 [2024-04-24 19:33:25.699980] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:00.223 [2024-04-24 19:33:25.805692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.223 [2024-04-24 19:33:25.805775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:00.223 [2024-04-24 19:33:25.805790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.848 ms 00:19:00.223 [2024-04-24 19:33:25.805800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.223 [2024-04-24 19:33:25.850485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.223 [2024-04-24 19:33:25.850551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:00.223 [2024-04-24 19:33:25.850566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.691 ms 00:19:00.223 [2024-04-24 19:33:25.850575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.223 [2024-04-24 19:33:25.852370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.223 [2024-04-24 19:33:25.852403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:00.223 [2024-04-24 19:33:25.852414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.744 ms 00:19:00.223 [2024-04-24 19:33:25.852428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.223 [2024-04-24 19:33:25.894157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.223 [2024-04-24 19:33:25.894215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:00.223 [2024-04-24 19:33:25.894228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.747 ms 00:19:00.223 [2024-04-24 19:33:25.894238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.223 [2024-04-24 19:33:25.894291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.223 [2024-04-24 19:33:25.894309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:00.223 [2024-04-24 19:33:25.894319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:00.223 [2024-04-24 19:33:25.894328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.223 [2024-04-24 19:33:25.894422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:00.223 [2024-04-24 19:33:25.894455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:00.223 [2024-04-24 19:33:25.894464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:00.223 [2024-04-24 19:33:25.894473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:00.223 [2024-04-24 19:33:25.895579] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3701.132 ms, result 0 00:19:00.482 { 00:19:00.482 "name": "ftl0", 00:19:00.482 "uuid": "87bbb78a-af37-4943-b502-8b4be3f21a60" 00:19:00.482 } 00:19:00.482 19:33:25 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:19:00.482 19:33:25 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:00.482 19:33:25 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:19:00.482 19:33:26 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:00.741 [2024-04-24 19:33:26.215610] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel created on ftl0 00:19:00.741 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:00.741 Zero copy mechanism will not be used. 00:19:00.741 Running I/O for 4 seconds... 00:19:04.934 00:19:04.934 Latency(us) 00:19:04.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.934 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:04.934 ftl0 : 4.00 2120.99 140.85 0.00 0.00 496.07 221.79 4349.99 00:19:04.934 =================================================================================================================== 00:19:04.934 Total : 2120.99 140.85 0.00 0.00 496.07 221.79 4349.99 00:19:04.934 [2024-04-24 19:33:30.219302] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:04.934 0 00:19:04.934 19:33:30 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:04.934 [2024-04-24 19:33:30.351422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:04.934 Running I/O for 4 seconds... 00:19:09.125 00:19:09.125 Latency(us) 00:19:09.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.125 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:09.125 ftl0 : 4.01 8241.70 32.19 0.00 0.00 15498.04 289.76 45331.45 00:19:09.125 =================================================================================================================== 00:19:09.125 Total : 8241.70 32.19 0.00 0.00 15498.04 0.00 45331.45 00:19:09.125 0 00:19:09.125 [2024-04-24 19:33:34.370786] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:09.125 19:33:34 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:09.125 [2024-04-24 19:33:34.499578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:09.125 Running I/O for 4 seconds... 
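The ftl0 device driven here was created a few steps back with bdev_ftl_create, and each timed workload is submitted over RPC to a bdevperf process that was started in wait-for-RPC mode (-z -T ftl0, as echoed in the timing_exit line near the end of this test). A condensed sketch of that sequence, reusing the exact paths and flag values that appear in this log:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    BPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

    # FTL bdev on the thin-provisioned lvol (-d) with nvc0n1p0 as NV cache (-c).
    # The 20 MiB L2P DRAM limit is what the "l2p maximum resident size is:
    # 19 (of 20) MiB" notice above refers to; -t 240 raises the RPC timeout to
    # cover the slow first startup (the 4 GiB NV-cache scrub took ~3.1 s here).
    $RPC -t 240 bdev_ftl_create -b ftl0 -d f08faeb6-0a66-4c94-8a9d-d4212a4450b5 \
        -c nvc0n1p0 --l2p_dram_limit 20

    # Three 4-second runs against ftl0: -q is queue depth, -o the IO size in
    # bytes. 69632 B (68 KiB) exceeds the 65536 B zero-copy threshold, which is
    # why the first run logs "Zero copy mechanism will not be used".
    $BPERF perform_tests -q 1   -w randwrite -t 4 -o 69632
    $BPERF perform_tests -q 128 -w randwrite -t 4 -o 4096
    $BPERF perform_tests -q 128 -w verify    -t 4 -o 4096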
00:19:13.315 00:19:13.316 Latency(us) 00:19:13.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.316 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:13.316 Verification LBA range: start 0x0 length 0x1400000 00:19:13.316 ftl0 : 4.01 7805.58 30.49 0.00 0.00 16346.63 293.34 40981.46 00:19:13.316 =================================================================================================================== 00:19:13.316 Total : 7805.58 30.49 0.00 0.00 16346.63 0.00 40981.46 00:19:13.316 0 00:19:13.316 [2024-04-24 19:33:38.522530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:13.316 19:33:38 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:13.316 [2024-04-24 19:33:38.745516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.316 [2024-04-24 19:33:38.745575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:13.316 [2024-04-24 19:33:38.745591] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:13.316 [2024-04-24 19:33:38.745602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.316 [2024-04-24 19:33:38.745626] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:13.316 [2024-04-24 19:33:38.750094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.316 [2024-04-24 19:33:38.750132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:13.316 [2024-04-24 19:33:38.750146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.444 ms 00:19:13.316 [2024-04-24 19:33:38.750156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.316 [2024-04-24 19:33:38.751754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.316 [2024-04-24 19:33:38.751795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:13.316 [2024-04-24 19:33:38.751809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.564 ms 00:19:13.316 [2024-04-24 19:33:38.751818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.316 [2024-04-24 19:33:38.965582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.316 [2024-04-24 19:33:38.965671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:13.316 [2024-04-24 19:33:38.965695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 214.142 ms 00:19:13.316 [2024-04-24 19:33:38.965704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.316 [2024-04-24 19:33:38.971845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.316 [2024-04-24 19:33:38.971890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:13.316 [2024-04-24 19:33:38.971904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:19:13.316 [2024-04-24 19:33:38.971913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.576 [2024-04-24 19:33:39.020312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.576 [2024-04-24 19:33:39.020378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:13.576 [2024-04-24 19:33:39.020406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
48.385 ms 00:19:13.576 [2024-04-24 19:33:39.020415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.576 [2024-04-24 19:33:39.048048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.576 [2024-04-24 19:33:39.048113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:13.576 [2024-04-24 19:33:39.048132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.609 ms 00:19:13.576 [2024-04-24 19:33:39.048142] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.576 [2024-04-24 19:33:39.048330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.576 [2024-04-24 19:33:39.048343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:13.576 [2024-04-24 19:33:39.048356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:19:13.576 [2024-04-24 19:33:39.048364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.576 [2024-04-24 19:33:39.096287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.576 [2024-04-24 19:33:39.096355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:13.576 [2024-04-24 19:33:39.096373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.988 ms 00:19:13.576 [2024-04-24 19:33:39.096383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.576 [2024-04-24 19:33:39.143760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.576 [2024-04-24 19:33:39.143824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:13.576 [2024-04-24 19:33:39.143841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.379 ms 00:19:13.576 [2024-04-24 19:33:39.143850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.576 [2024-04-24 19:33:39.193342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.576 [2024-04-24 19:33:39.193406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:13.576 [2024-04-24 19:33:39.193424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.498 ms 00:19:13.576 [2024-04-24 19:33:39.193433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.576 [2024-04-24 19:33:39.242236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.576 [2024-04-24 19:33:39.242301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:13.576 [2024-04-24 19:33:39.242320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.740 ms 00:19:13.576 [2024-04-24 19:33:39.242329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.576 [2024-04-24 19:33:39.242406] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:13.576 [2024-04-24 19:33:39.242425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 
19:33:39.242470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:13.576 [2024-04-24 19:33:39.242647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 
00:19:13.577 [2024-04-24 19:33:39.242750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.242994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 
wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:13.577 [2024-04-24 19:33:39.243502] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:13.577 [2024-04-24 19:33:39.243545] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 87bbb78a-af37-4943-b502-8b4be3f21a60 00:19:13.577 [2024-04-24 19:33:39.243555] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:13.577 [2024-04-24 19:33:39.243567] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:13.577 
[2024-04-24 19:33:39.243575] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:13.577 [2024-04-24 19:33:39.243586] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:13.577 [2024-04-24 19:33:39.243594] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:13.577 [2024-04-24 19:33:39.243609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:13.577 [2024-04-24 19:33:39.243618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:13.577 [2024-04-24 19:33:39.243627] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:13.577 [2024-04-24 19:33:39.243644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:13.577 [2024-04-24 19:33:39.243655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.577 [2024-04-24 19:33:39.243664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:13.577 [2024-04-24 19:33:39.243677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:19:13.577 [2024-04-24 19:33:39.243685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.837 [2024-04-24 19:33:39.268513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.837 [2024-04-24 19:33:39.268569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:13.837 [2024-04-24 19:33:39.268586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.795 ms 00:19:13.837 [2024-04-24 19:33:39.268597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.837 [2024-04-24 19:33:39.268963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:13.837 [2024-04-24 19:33:39.268979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:13.837 [2024-04-24 19:33:39.268990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:19:13.837 [2024-04-24 19:33:39.268999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.837 [2024-04-24 19:33:39.338841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.837 [2024-04-24 19:33:39.338902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:13.837 [2024-04-24 19:33:39.338921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.837 [2024-04-24 19:33:39.338930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.837 [2024-04-24 19:33:39.339005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.837 [2024-04-24 19:33:39.339014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:13.837 [2024-04-24 19:33:39.339025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.837 [2024-04-24 19:33:39.339035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.837 [2024-04-24 19:33:39.339149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.837 [2024-04-24 19:33:39.339164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:13.837 [2024-04-24 19:33:39.339176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.837 [2024-04-24 19:33:39.339186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.837 [2024-04-24 19:33:39.339212] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:13.837 [2024-04-24 19:33:39.339220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:13.837 [2024-04-24 19:33:39.339231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.837 [2024-04-24 19:33:39.339239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:13.837 [2024-04-24 19:33:39.484349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:13.837 [2024-04-24 19:33:39.484411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:13.837 [2024-04-24 19:33:39.484428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:13.837 [2024-04-24 19:33:39.484441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.096 [2024-04-24 19:33:39.539419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.096 [2024-04-24 19:33:39.539482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.096 [2024-04-24 19:33:39.539497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.096 [2024-04-24 19:33:39.539506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.096 [2024-04-24 19:33:39.539601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.096 [2024-04-24 19:33:39.539612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:14.096 [2024-04-24 19:33:39.539623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.096 [2024-04-24 19:33:39.539652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.096 [2024-04-24 19:33:39.539706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.096 [2024-04-24 19:33:39.539717] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:14.096 [2024-04-24 19:33:39.539728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.096 [2024-04-24 19:33:39.539737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.096 [2024-04-24 19:33:39.539881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.096 [2024-04-24 19:33:39.539894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:14.096 [2024-04-24 19:33:39.539905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.096 [2024-04-24 19:33:39.539914] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.096 [2024-04-24 19:33:39.539960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.096 [2024-04-24 19:33:39.539974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:14.096 [2024-04-24 19:33:39.539985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.096 [2024-04-24 19:33:39.539995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.096 [2024-04-24 19:33:39.540036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.096 [2024-04-24 19:33:39.540051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:14.096 [2024-04-24 19:33:39.540062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.096 [2024-04-24 19:33:39.540071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:14.096 [2024-04-24 19:33:39.540125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:14.096 [2024-04-24 19:33:39.540135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:14.096 [2024-04-24 19:33:39.540146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:14.096 [2024-04-24 19:33:39.540155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.096 [2024-04-24 19:33:39.540287] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 796.270 ms, result 0 00:19:14.096 true 00:19:14.096 19:33:39 -- ftl/bdevperf.sh@37 -- # killprocess 78652 00:19:14.096 19:33:39 -- common/autotest_common.sh@936 -- # '[' -z 78652 ']' 00:19:14.096 19:33:39 -- common/autotest_common.sh@940 -- # kill -0 78652 00:19:14.096 19:33:39 -- common/autotest_common.sh@941 -- # uname 00:19:14.096 19:33:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:14.096 19:33:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78652 00:19:14.096 19:33:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:14.096 19:33:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:14.096 19:33:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78652' 00:19:14.096 killing process with pid 78652 00:19:14.096 19:33:39 -- common/autotest_common.sh@955 -- # kill 78652 00:19:14.096 Received shutdown signal, test time was about 4.000000 seconds 00:19:14.096 00:19:14.096 Latency(us) 00:19:14.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.096 =================================================================================================================== 00:19:14.096 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:14.096 19:33:39 -- common/autotest_common.sh@960 -- # wait 78652 00:19:22.235 19:33:47 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:19:22.235 19:33:47 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:19:22.235 19:33:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:22.235 19:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:22.235 19:33:47 -- ftl/bdevperf.sh@41 -- # remove_shm 00:19:22.235 19:33:47 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:22.235 Remove shared memory files 00:19:22.235 19:33:47 -- ftl/common.sh@205 -- # rm -f rm -f 00:19:22.235 19:33:47 -- ftl/common.sh@206 -- # rm -f rm -f 00:19:22.235 19:33:47 -- ftl/common.sh@207 -- # rm -f rm -f 00:19:22.235 19:33:47 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:22.235 19:33:47 -- ftl/common.sh@209 -- # rm -f rm -f 00:19:22.235 ************************************ 00:19:22.235 END TEST ftl_bdevperf 00:19:22.235 ************************************ 00:19:22.235 00:19:22.235 real 0m29.792s 00:19:22.235 user 0m32.538s 00:19:22.235 sys 0m1.233s 00:19:22.235 19:33:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:22.235 19:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:22.235 19:33:47 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:22.235 19:33:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:22.235 19:33:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:22.235 19:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:22.235 ************************************ 
00:19:22.235 START TEST ftl_trim 00:19:22.235 ************************************ 00:19:22.235 19:33:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:22.235 * Looking for test storage... 00:19:22.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:22.235 19:33:47 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:22.235 19:33:47 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:22.235 19:33:47 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:22.235 19:33:47 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:22.235 19:33:47 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:22.235 19:33:47 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:22.235 19:33:47 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:22.235 19:33:47 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:22.235 19:33:47 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:22.235 19:33:47 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.235 19:33:47 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.235 19:33:47 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:22.235 19:33:47 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:22.235 19:33:47 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:22.235 19:33:47 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:22.235 19:33:47 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:22.235 19:33:47 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:22.235 19:33:47 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.235 19:33:47 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:22.235 19:33:47 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:22.235 19:33:47 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:22.235 19:33:47 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:22.235 19:33:47 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:22.235 19:33:47 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:22.235 19:33:47 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:22.235 19:33:47 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:22.235 19:33:47 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:22.235 19:33:47 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:22.235 19:33:47 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:22.235 19:33:47 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:22.235 19:33:47 -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:22.235 19:33:47 -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:22.235 19:33:47 -- ftl/trim.sh@25 -- # timeout=240 00:19:22.235 19:33:47 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:22.235 19:33:47 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:22.235 19:33:47 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:22.235 19:33:47 -- ftl/trim.sh@34 -- # 
export FTL_BDEV_NAME=ftl0 00:19:22.235 19:33:47 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:22.235 19:33:47 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:22.235 19:33:47 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:22.235 19:33:47 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:22.235 19:33:47 -- ftl/trim.sh@40 -- # svcpid=79106 00:19:22.235 19:33:47 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:22.235 19:33:47 -- ftl/trim.sh@41 -- # waitforlisten 79106 00:19:22.236 19:33:47 -- common/autotest_common.sh@817 -- # '[' -z 79106 ']' 00:19:22.236 19:33:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.236 19:33:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:22.236 19:33:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.236 19:33:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:22.236 19:33:47 -- common/autotest_common.sh@10 -- # set +x 00:19:22.495 [2024-04-24 19:33:48.025702] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:19:22.495 [2024-04-24 19:33:48.026661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79106 ] 00:19:22.755 [2024-04-24 19:33:48.213107] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:23.012 [2024-04-24 19:33:48.512917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.012 [2024-04-24 19:33:48.513061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.012 [2024-04-24 19:33:48.513084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.386 19:33:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:24.386 19:33:49 -- common/autotest_common.sh@850 -- # return 0 00:19:24.386 19:33:49 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:24.386 19:33:49 -- ftl/common.sh@54 -- # local name=nvme0 00:19:24.386 19:33:49 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:24.386 19:33:49 -- ftl/common.sh@56 -- # local size=103424 00:19:24.386 19:33:49 -- ftl/common.sh@59 -- # local base_bdev 00:19:24.386 19:33:49 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:24.386 19:33:49 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:24.386 19:33:49 -- ftl/common.sh@62 -- # local base_size 00:19:24.386 19:33:49 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:24.386 19:33:49 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:19:24.386 19:33:49 -- common/autotest_common.sh@1365 -- # local bdev_info 00:19:24.386 19:33:49 -- common/autotest_common.sh@1366 -- # local bs 00:19:24.386 19:33:49 -- common/autotest_common.sh@1367 -- # local nb 00:19:24.386 19:33:49 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:24.643 19:33:50 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:19:24.643 { 00:19:24.643 "name": "nvme0n1", 00:19:24.643 "aliases": [ 00:19:24.643 
"d07f860a-9437-4fdf-8226-58736874b919" 00:19:24.643 ], 00:19:24.643 "product_name": "NVMe disk", 00:19:24.643 "block_size": 4096, 00:19:24.643 "num_blocks": 1310720, 00:19:24.643 "uuid": "d07f860a-9437-4fdf-8226-58736874b919", 00:19:24.643 "assigned_rate_limits": { 00:19:24.643 "rw_ios_per_sec": 0, 00:19:24.643 "rw_mbytes_per_sec": 0, 00:19:24.643 "r_mbytes_per_sec": 0, 00:19:24.643 "w_mbytes_per_sec": 0 00:19:24.643 }, 00:19:24.643 "claimed": true, 00:19:24.643 "claim_type": "read_many_write_one", 00:19:24.643 "zoned": false, 00:19:24.643 "supported_io_types": { 00:19:24.643 "read": true, 00:19:24.643 "write": true, 00:19:24.643 "unmap": true, 00:19:24.643 "write_zeroes": true, 00:19:24.643 "flush": true, 00:19:24.643 "reset": true, 00:19:24.644 "compare": true, 00:19:24.644 "compare_and_write": false, 00:19:24.644 "abort": true, 00:19:24.644 "nvme_admin": true, 00:19:24.644 "nvme_io": true 00:19:24.644 }, 00:19:24.644 "driver_specific": { 00:19:24.644 "nvme": [ 00:19:24.644 { 00:19:24.644 "pci_address": "0000:00:11.0", 00:19:24.644 "trid": { 00:19:24.644 "trtype": "PCIe", 00:19:24.644 "traddr": "0000:00:11.0" 00:19:24.644 }, 00:19:24.644 "ctrlr_data": { 00:19:24.644 "cntlid": 0, 00:19:24.644 "vendor_id": "0x1b36", 00:19:24.644 "model_number": "QEMU NVMe Ctrl", 00:19:24.644 "serial_number": "12341", 00:19:24.644 "firmware_revision": "8.0.0", 00:19:24.644 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:24.644 "oacs": { 00:19:24.644 "security": 0, 00:19:24.644 "format": 1, 00:19:24.644 "firmware": 0, 00:19:24.644 "ns_manage": 1 00:19:24.644 }, 00:19:24.644 "multi_ctrlr": false, 00:19:24.644 "ana_reporting": false 00:19:24.644 }, 00:19:24.644 "vs": { 00:19:24.644 "nvme_version": "1.4" 00:19:24.644 }, 00:19:24.644 "ns_data": { 00:19:24.644 "id": 1, 00:19:24.644 "can_share": false 00:19:24.644 } 00:19:24.644 } 00:19:24.644 ], 00:19:24.644 "mp_policy": "active_passive" 00:19:24.644 } 00:19:24.644 } 00:19:24.644 ]' 00:19:24.644 19:33:50 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:19:24.644 19:33:50 -- common/autotest_common.sh@1369 -- # bs=4096 00:19:24.644 19:33:50 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:19:24.902 19:33:50 -- common/autotest_common.sh@1370 -- # nb=1310720 00:19:24.902 19:33:50 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:19:24.902 19:33:50 -- common/autotest_common.sh@1374 -- # echo 5120 00:19:24.902 19:33:50 -- ftl/common.sh@63 -- # base_size=5120 00:19:24.902 19:33:50 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:24.902 19:33:50 -- ftl/common.sh@67 -- # clear_lvols 00:19:24.902 19:33:50 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:24.902 19:33:50 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:25.161 19:33:50 -- ftl/common.sh@28 -- # stores=caf50340-a6c0-4f2c-87de-e4f64dbfc082 00:19:25.161 19:33:50 -- ftl/common.sh@29 -- # for lvs in $stores 00:19:25.161 19:33:50 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u caf50340-a6c0-4f2c-87de-e4f64dbfc082 00:19:25.419 19:33:50 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:25.419 19:33:51 -- ftl/common.sh@68 -- # lvs=9fc7b2a3-b1cb-4972-8078-0496be1cc73c 00:19:25.420 19:33:51 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9fc7b2a3-b1cb-4972-8078-0496be1cc73c 00:19:25.678 19:33:51 -- ftl/trim.sh@43 -- # split_bdev=3f104183-b30e-426c-9570-945ea04cfd4e 
00:19:25.678 19:33:51 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3f104183-b30e-426c-9570-945ea04cfd4e 00:19:25.678 19:33:51 -- ftl/common.sh@35 -- # local name=nvc0 00:19:25.678 19:33:51 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:25.678 19:33:51 -- ftl/common.sh@37 -- # local base_bdev=3f104183-b30e-426c-9570-945ea04cfd4e 00:19:25.678 19:33:51 -- ftl/common.sh@38 -- # local cache_size= 00:19:25.678 19:33:51 -- ftl/common.sh@41 -- # get_bdev_size 3f104183-b30e-426c-9570-945ea04cfd4e 00:19:25.678 19:33:51 -- common/autotest_common.sh@1364 -- # local bdev_name=3f104183-b30e-426c-9570-945ea04cfd4e 00:19:25.678 19:33:51 -- common/autotest_common.sh@1365 -- # local bdev_info 00:19:25.678 19:33:51 -- common/autotest_common.sh@1366 -- # local bs 00:19:25.678 19:33:51 -- common/autotest_common.sh@1367 -- # local nb 00:19:25.678 19:33:51 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f104183-b30e-426c-9570-945ea04cfd4e 00:19:25.936 19:33:51 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:19:25.936 { 00:19:25.936 "name": "3f104183-b30e-426c-9570-945ea04cfd4e", 00:19:25.936 "aliases": [ 00:19:25.936 "lvs/nvme0n1p0" 00:19:25.936 ], 00:19:25.936 "product_name": "Logical Volume", 00:19:25.936 "block_size": 4096, 00:19:25.936 "num_blocks": 26476544, 00:19:25.936 "uuid": "3f104183-b30e-426c-9570-945ea04cfd4e", 00:19:25.936 "assigned_rate_limits": { 00:19:25.936 "rw_ios_per_sec": 0, 00:19:25.936 "rw_mbytes_per_sec": 0, 00:19:25.936 "r_mbytes_per_sec": 0, 00:19:25.936 "w_mbytes_per_sec": 0 00:19:25.936 }, 00:19:25.936 "claimed": false, 00:19:25.936 "zoned": false, 00:19:25.936 "supported_io_types": { 00:19:25.936 "read": true, 00:19:25.936 "write": true, 00:19:25.936 "unmap": true, 00:19:25.936 "write_zeroes": true, 00:19:25.936 "flush": false, 00:19:25.936 "reset": true, 00:19:25.936 "compare": false, 00:19:25.936 "compare_and_write": false, 00:19:25.936 "abort": false, 00:19:25.936 "nvme_admin": false, 00:19:25.936 "nvme_io": false 00:19:25.936 }, 00:19:25.936 "driver_specific": { 00:19:25.936 "lvol": { 00:19:25.936 "lvol_store_uuid": "9fc7b2a3-b1cb-4972-8078-0496be1cc73c", 00:19:25.936 "base_bdev": "nvme0n1", 00:19:25.936 "thin_provision": true, 00:19:25.936 "snapshot": false, 00:19:25.936 "clone": false, 00:19:25.936 "esnap_clone": false 00:19:25.936 } 00:19:25.936 } 00:19:25.936 } 00:19:25.936 ]' 00:19:25.936 19:33:51 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:19:25.936 19:33:51 -- common/autotest_common.sh@1369 -- # bs=4096 00:19:25.936 19:33:51 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:19:26.195 19:33:51 -- common/autotest_common.sh@1370 -- # nb=26476544 00:19:26.195 19:33:51 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:19:26.195 19:33:51 -- common/autotest_common.sh@1374 -- # echo 103424 00:19:26.195 19:33:51 -- ftl/common.sh@41 -- # local base_size=5171 00:19:26.195 19:33:51 -- ftl/common.sh@44 -- # local nvc_bdev 00:19:26.195 19:33:51 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:26.453 19:33:51 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:26.453 19:33:51 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:26.453 19:33:51 -- ftl/common.sh@48 -- # get_bdev_size 3f104183-b30e-426c-9570-945ea04cfd4e 00:19:26.453 19:33:51 -- common/autotest_common.sh@1364 -- # local bdev_name=3f104183-b30e-426c-9570-945ea04cfd4e 00:19:26.453 19:33:51 -- common/autotest_common.sh@1365 -- # 
local bdev_info 00:19:26.453 19:33:51 -- common/autotest_common.sh@1366 -- # local bs 00:19:26.453 19:33:51 -- common/autotest_common.sh@1367 -- # local nb 00:19:26.453 19:33:51 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f104183-b30e-426c-9570-945ea04cfd4e 00:19:26.711 19:33:52 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:19:26.711 { 00:19:26.711 "name": "3f104183-b30e-426c-9570-945ea04cfd4e", 00:19:26.711 "aliases": [ 00:19:26.711 "lvs/nvme0n1p0" 00:19:26.711 ], 00:19:26.711 "product_name": "Logical Volume", 00:19:26.711 "block_size": 4096, 00:19:26.711 "num_blocks": 26476544, 00:19:26.711 "uuid": "3f104183-b30e-426c-9570-945ea04cfd4e", 00:19:26.711 "assigned_rate_limits": { 00:19:26.711 "rw_ios_per_sec": 0, 00:19:26.711 "rw_mbytes_per_sec": 0, 00:19:26.711 "r_mbytes_per_sec": 0, 00:19:26.711 "w_mbytes_per_sec": 0 00:19:26.711 }, 00:19:26.711 "claimed": false, 00:19:26.711 "zoned": false, 00:19:26.711 "supported_io_types": { 00:19:26.711 "read": true, 00:19:26.711 "write": true, 00:19:26.711 "unmap": true, 00:19:26.711 "write_zeroes": true, 00:19:26.711 "flush": false, 00:19:26.711 "reset": true, 00:19:26.711 "compare": false, 00:19:26.711 "compare_and_write": false, 00:19:26.711 "abort": false, 00:19:26.711 "nvme_admin": false, 00:19:26.711 "nvme_io": false 00:19:26.711 }, 00:19:26.711 "driver_specific": { 00:19:26.711 "lvol": { 00:19:26.711 "lvol_store_uuid": "9fc7b2a3-b1cb-4972-8078-0496be1cc73c", 00:19:26.711 "base_bdev": "nvme0n1", 00:19:26.711 "thin_provision": true, 00:19:26.711 "snapshot": false, 00:19:26.711 "clone": false, 00:19:26.711 "esnap_clone": false 00:19:26.711 } 00:19:26.711 } 00:19:26.711 } 00:19:26.711 ]' 00:19:26.711 19:33:52 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:19:26.711 19:33:52 -- common/autotest_common.sh@1369 -- # bs=4096 00:19:26.711 19:33:52 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:19:26.711 19:33:52 -- common/autotest_common.sh@1370 -- # nb=26476544 00:19:26.711 19:33:52 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:19:26.711 19:33:52 -- common/autotest_common.sh@1374 -- # echo 103424 00:19:26.711 19:33:52 -- ftl/common.sh@48 -- # cache_size=5171 00:19:26.711 19:33:52 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:26.969 19:33:52 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:26.969 19:33:52 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:26.969 19:33:52 -- ftl/trim.sh@47 -- # get_bdev_size 3f104183-b30e-426c-9570-945ea04cfd4e 00:19:26.969 19:33:52 -- common/autotest_common.sh@1364 -- # local bdev_name=3f104183-b30e-426c-9570-945ea04cfd4e 00:19:26.969 19:33:52 -- common/autotest_common.sh@1365 -- # local bdev_info 00:19:26.969 19:33:52 -- common/autotest_common.sh@1366 -- # local bs 00:19:26.969 19:33:52 -- common/autotest_common.sh@1367 -- # local nb 00:19:26.969 19:33:52 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3f104183-b30e-426c-9570-945ea04cfd4e 00:19:27.264 19:33:52 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:19:27.264 { 00:19:27.264 "name": "3f104183-b30e-426c-9570-945ea04cfd4e", 00:19:27.264 "aliases": [ 00:19:27.264 "lvs/nvme0n1p0" 00:19:27.264 ], 00:19:27.264 "product_name": "Logical Volume", 00:19:27.264 "block_size": 4096, 00:19:27.264 "num_blocks": 26476544, 00:19:27.264 "uuid": "3f104183-b30e-426c-9570-945ea04cfd4e", 00:19:27.264 "assigned_rate_limits": { 00:19:27.264 "rw_ios_per_sec": 0, 
00:19:27.264 "rw_mbytes_per_sec": 0, 00:19:27.264 "r_mbytes_per_sec": 0, 00:19:27.264 "w_mbytes_per_sec": 0 00:19:27.264 }, 00:19:27.264 "claimed": false, 00:19:27.264 "zoned": false, 00:19:27.264 "supported_io_types": { 00:19:27.264 "read": true, 00:19:27.264 "write": true, 00:19:27.264 "unmap": true, 00:19:27.264 "write_zeroes": true, 00:19:27.264 "flush": false, 00:19:27.264 "reset": true, 00:19:27.264 "compare": false, 00:19:27.264 "compare_and_write": false, 00:19:27.264 "abort": false, 00:19:27.264 "nvme_admin": false, 00:19:27.264 "nvme_io": false 00:19:27.264 }, 00:19:27.264 "driver_specific": { 00:19:27.264 "lvol": { 00:19:27.264 "lvol_store_uuid": "9fc7b2a3-b1cb-4972-8078-0496be1cc73c", 00:19:27.264 "base_bdev": "nvme0n1", 00:19:27.264 "thin_provision": true, 00:19:27.264 "snapshot": false, 00:19:27.264 "clone": false, 00:19:27.264 "esnap_clone": false 00:19:27.264 } 00:19:27.264 } 00:19:27.264 } 00:19:27.264 ]' 00:19:27.264 19:33:52 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:19:27.264 19:33:52 -- common/autotest_common.sh@1369 -- # bs=4096 00:19:27.264 19:33:52 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:19:27.264 19:33:52 -- common/autotest_common.sh@1370 -- # nb=26476544 00:19:27.264 19:33:52 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:19:27.264 19:33:52 -- common/autotest_common.sh@1374 -- # echo 103424 00:19:27.264 19:33:52 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:27.264 19:33:52 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3f104183-b30e-426c-9570-945ea04cfd4e -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:27.524 [2024-04-24 19:33:52.959180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.524 [2024-04-24 19:33:52.959243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:27.524 [2024-04-24 19:33:52.959264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:27.524 [2024-04-24 19:33:52.959274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.524 [2024-04-24 19:33:52.963059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.524 [2024-04-24 19:33:52.963113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.524 [2024-04-24 19:33:52.963129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.750 ms 00:19:27.524 [2024-04-24 19:33:52.963139] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.524 [2024-04-24 19:33:52.963343] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:27.524 [2024-04-24 19:33:52.964768] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:27.524 [2024-04-24 19:33:52.964805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.524 [2024-04-24 19:33:52.964816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.524 [2024-04-24 19:33:52.964831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.482 ms 00:19:27.524 [2024-04-24 19:33:52.964842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.524 [2024-04-24 19:33:52.964988] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bc9fe5a4-cec1-4dff-a75f-c7daf97221d9 00:19:27.524 [2024-04-24 19:33:52.966573] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.524 [2024-04-24 19:33:52.966614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:27.524 [2024-04-24 19:33:52.966627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:27.524 [2024-04-24 19:33:52.966648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.524 [2024-04-24 19:33:52.974874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.524 [2024-04-24 19:33:52.974920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.524 [2024-04-24 19:33:52.974933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.141 ms 00:19:27.524 [2024-04-24 19:33:52.974948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.524 [2024-04-24 19:33:52.975135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.524 [2024-04-24 19:33:52.975168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.524 [2024-04-24 19:33:52.975179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:27.524 [2024-04-24 19:33:52.975194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.524 [2024-04-24 19:33:52.975248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.524 [2024-04-24 19:33:52.975263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:27.524 [2024-04-24 19:33:52.975272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:27.524 [2024-04-24 19:33:52.975282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.524 [2024-04-24 19:33:52.975322] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:27.524 [2024-04-24 19:33:52.982728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.524 [2024-04-24 19:33:52.982777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.524 [2024-04-24 19:33:52.982791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.423 ms 00:19:27.524 [2024-04-24 19:33:52.982802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.524 [2024-04-24 19:33:52.982909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.524 [2024-04-24 19:33:52.982922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:27.524 [2024-04-24 19:33:52.982933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:27.524 [2024-04-24 19:33:52.982972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.524 [2024-04-24 19:33:52.983017] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:27.524 [2024-04-24 19:33:52.983149] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:27.524 [2024-04-24 19:33:52.983165] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:27.524 [2024-04-24 19:33:52.983178] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:27.524 [2024-04-24 19:33:52.983197] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 
00:19:27.524 [2024-04-24 19:33:52.983207] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:27.524 [2024-04-24 19:33:52.983219] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:27.524 [2024-04-24 19:33:52.983228] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:27.524 [2024-04-24 19:33:52.983239] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:27.524 [2024-04-24 19:33:52.983248] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:27.524 [2024-04-24 19:33:52.983262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.524 [2024-04-24 19:33:52.983271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:27.524 [2024-04-24 19:33:52.983283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:19:27.524 [2024-04-24 19:33:52.983291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.524 [2024-04-24 19:33:52.983372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.525 [2024-04-24 19:33:52.983384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:27.525 [2024-04-24 19:33:52.983395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:27.525 [2024-04-24 19:33:52.983403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.525 [2024-04-24 19:33:52.983511] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:27.525 [2024-04-24 19:33:52.983522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:27.525 [2024-04-24 19:33:52.983535] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.525 [2024-04-24 19:33:52.983544] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.525 [2024-04-24 19:33:52.983556] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:27.525 [2024-04-24 19:33:52.983563] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:27.525 [2024-04-24 19:33:52.983573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:27.525 [2024-04-24 19:33:52.983581] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:27.525 [2024-04-24 19:33:52.983591] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:27.525 [2024-04-24 19:33:52.983598] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.525 [2024-04-24 19:33:52.983608] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:27.525 [2024-04-24 19:33:52.983616] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:27.525 [2024-04-24 19:33:52.983626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:27.525 [2024-04-24 19:33:52.983652] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:27.525 [2024-04-24 19:33:52.983663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:27.525 [2024-04-24 19:33:52.983671] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.525 [2024-04-24 19:33:52.983682] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:27.525 [2024-04-24 19:33:52.983690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:27.525 
[2024-04-24 19:33:52.983701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.525 [2024-04-24 19:33:52.983709] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:27.525 [2024-04-24 19:33:52.983719] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:27.525 [2024-04-24 19:33:52.983727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:27.525 [2024-04-24 19:33:52.983737] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:27.525 [2024-04-24 19:33:52.983744] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:27.525 [2024-04-24 19:33:52.983754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:27.525 [2024-04-24 19:33:52.983761] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:27.525 [2024-04-24 19:33:52.983770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:27.525 [2024-04-24 19:33:52.983778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:27.525 [2024-04-24 19:33:52.983787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:27.525 [2024-04-24 19:33:52.983796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:27.525 [2024-04-24 19:33:52.983808] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:27.525 [2024-04-24 19:33:52.983815] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:27.525 [2024-04-24 19:33:52.983826] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:27.525 [2024-04-24 19:33:52.983833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:27.525 [2024-04-24 19:33:52.983845] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:27.525 [2024-04-24 19:33:52.983856] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:27.525 [2024-04-24 19:33:52.983866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.525 [2024-04-24 19:33:52.983874] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:27.525 [2024-04-24 19:33:52.983884] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:27.525 [2024-04-24 19:33:52.983896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:27.525 [2024-04-24 19:33:52.983905] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:27.525 [2024-04-24 19:33:52.983914] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:27.525 [2024-04-24 19:33:52.983929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:27.525 [2024-04-24 19:33:52.983937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:27.525 [2024-04-24 19:33:52.983948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:27.525 [2024-04-24 19:33:52.983956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:27.525 [2024-04-24 19:33:52.983966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:27.525 [2024-04-24 19:33:52.983974] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:27.525 [2024-04-24 19:33:52.983983] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:27.525 [2024-04-24 19:33:52.983994] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 102400.00 MiB 00:19:27.525 [2024-04-24 19:33:52.984007] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:27.525 [2024-04-24 19:33:52.984018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.525 [2024-04-24 19:33:52.984030] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:27.525 [2024-04-24 19:33:52.984039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:27.525 [2024-04-24 19:33:52.984049] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:27.525 [2024-04-24 19:33:52.984058] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:27.525 [2024-04-24 19:33:52.984068] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:27.525 [2024-04-24 19:33:52.984077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:27.525 [2024-04-24 19:33:52.984087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:27.525 [2024-04-24 19:33:52.984095] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:27.525 [2024-04-24 19:33:52.984107] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:27.525 [2024-04-24 19:33:52.984116] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:27.525 [2024-04-24 19:33:52.984126] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:27.525 [2024-04-24 19:33:52.984135] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:27.525 [2024-04-24 19:33:52.984145] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:27.525 [2024-04-24 19:33:52.984154] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:27.525 [2024-04-24 19:33:52.984169] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:27.525 [2024-04-24 19:33:52.984178] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:27.525 [2024-04-24 19:33:52.984189] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:27.525 [2024-04-24 19:33:52.984198] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:27.525 [2024-04-24 19:33:52.984208] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:27.525 [2024-04-24 19:33:52.984217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.525 [2024-04-24 19:33:52.984231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:27.525 [2024-04-24 19:33:52.984240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:19:27.525 [2024-04-24 19:33:52.984250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.525 [2024-04-24 19:33:53.013352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.525 [2024-04-24 19:33:53.013417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.525 [2024-04-24 19:33:53.013434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.058 ms 00:19:27.525 [2024-04-24 19:33:53.013445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.525 [2024-04-24 19:33:53.013653] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.525 [2024-04-24 19:33:53.013670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:27.525 [2024-04-24 19:33:53.013682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:27.525 [2024-04-24 19:33:53.013714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.525 [2024-04-24 19:33:53.079891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.525 [2024-04-24 19:33:53.079953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.526 [2024-04-24 19:33:53.079970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.264 ms 00:19:27.526 [2024-04-24 19:33:53.079981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.526 [2024-04-24 19:33:53.080114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.526 [2024-04-24 19:33:53.080129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.526 [2024-04-24 19:33:53.080143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:27.526 [2024-04-24 19:33:53.080153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.526 [2024-04-24 19:33:53.080626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.526 [2024-04-24 19:33:53.080672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.526 [2024-04-24 19:33:53.080684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:19:27.526 [2024-04-24 19:33:53.080695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.526 [2024-04-24 19:33:53.080818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.526 [2024-04-24 19:33:53.080830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.526 [2024-04-24 19:33:53.080840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:19:27.526 [2024-04-24 19:33:53.080855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.526 [2024-04-24 19:33:53.120623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.526 [2024-04-24 19:33:53.120692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.526 [2024-04-24 19:33:53.120712] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.809 ms 00:19:27.526 [2024-04-24 19:33:53.120724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.526 [2024-04-24 19:33:53.139198] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:27.526 [2024-04-24 19:33:53.158391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.526 [2024-04-24 19:33:53.158452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:27.526 [2024-04-24 19:33:53.158471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.525 ms 00:19:27.526 [2024-04-24 19:33:53.158480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.784 [2024-04-24 19:33:53.244622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.784 [2024-04-24 19:33:53.244713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:27.784 [2024-04-24 19:33:53.244734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.179 ms 00:19:27.784 [2024-04-24 19:33:53.244744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.784 [2024-04-24 19:33:53.244919] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:19:27.784 [2024-04-24 19:33:53.244941] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:31.073 [2024-04-24 19:33:56.022189] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.073 [2024-04-24 19:33:56.022279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:31.073 [2024-04-24 19:33:56.022308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2782.608 ms 00:19:31.073 [2024-04-24 19:33:56.022322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.073 [2024-04-24 19:33:56.022742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.073 [2024-04-24 19:33:56.022771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:31.073 [2024-04-24 19:33:56.022790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:19:31.073 [2024-04-24 19:33:56.022801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.073 [2024-04-24 19:33:56.069584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.073 [2024-04-24 19:33:56.069680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:31.073 [2024-04-24 19:33:56.069702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.806 ms 00:19:31.073 [2024-04-24 19:33:56.069715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.073 [2024-04-24 19:33:56.115363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.073 [2024-04-24 19:33:56.115458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:31.073 [2024-04-24 19:33:56.115479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.545 ms 00:19:31.073 [2024-04-24 19:33:56.115491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.073 [2024-04-24 19:33:56.116179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.073 [2024-04-24 19:33:56.116227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:19:31.073 [2024-04-24 19:33:56.116259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:19:31.073 [2024-04-24 19:33:56.116270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.073 [2024-04-24 19:33:56.225968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.073 [2024-04-24 19:33:56.226057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:31.073 [2024-04-24 19:33:56.226084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.836 ms 00:19:31.073 [2024-04-24 19:33:56.226097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.073 [2024-04-24 19:33:56.273114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.073 [2024-04-24 19:33:56.273227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:31.073 [2024-04-24 19:33:56.273255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.907 ms 00:19:31.073 [2024-04-24 19:33:56.273270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.073 [2024-04-24 19:33:56.278279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.073 [2024-04-24 19:33:56.278345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:31.073 [2024-04-24 19:33:56.278363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.827 ms 00:19:31.073 [2024-04-24 19:33:56.278378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.073 [2024-04-24 19:33:56.324479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.073 [2024-04-24 19:33:56.324568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:31.073 [2024-04-24 19:33:56.324593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.074 ms 00:19:31.073 [2024-04-24 19:33:56.324607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.073 [2024-04-24 19:33:56.324861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.073 [2024-04-24 19:33:56.324892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:31.073 [2024-04-24 19:33:56.324913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:31.073 [2024-04-24 19:33:56.324927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.073 [2024-04-24 19:33:56.325055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.073 [2024-04-24 19:33:56.325072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:31.073 [2024-04-24 19:33:56.325088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:31.073 [2024-04-24 19:33:56.325101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.073 [2024-04-24 19:33:56.326552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:31.073 [2024-04-24 19:33:56.332819] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3373.405 ms, result 0 00:19:31.073 [2024-04-24 19:33:56.333658] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:31.073 { 00:19:31.073 "name": "ftl0", 00:19:31.073 "uuid": "bc9fe5a4-cec1-4dff-a75f-c7daf97221d9" 
00:19:31.073 } 00:19:31.073 19:33:56 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:31.073 19:33:56 -- common/autotest_common.sh@885 -- # local bdev_name=ftl0 00:19:31.073 19:33:56 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:31.073 19:33:56 -- common/autotest_common.sh@887 -- # local i 00:19:31.073 19:33:56 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:31.073 19:33:56 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:31.073 19:33:56 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:31.073 19:33:56 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:31.345 [ 00:19:31.345 { 00:19:31.345 "name": "ftl0", 00:19:31.345 "aliases": [ 00:19:31.345 "bc9fe5a4-cec1-4dff-a75f-c7daf97221d9" 00:19:31.345 ], 00:19:31.345 "product_name": "FTL disk", 00:19:31.345 "block_size": 4096, 00:19:31.345 "num_blocks": 23592960, 00:19:31.345 "uuid": "bc9fe5a4-cec1-4dff-a75f-c7daf97221d9", 00:19:31.345 "assigned_rate_limits": { 00:19:31.345 "rw_ios_per_sec": 0, 00:19:31.345 "rw_mbytes_per_sec": 0, 00:19:31.345 "r_mbytes_per_sec": 0, 00:19:31.345 "w_mbytes_per_sec": 0 00:19:31.345 }, 00:19:31.345 "claimed": false, 00:19:31.345 "zoned": false, 00:19:31.345 "supported_io_types": { 00:19:31.345 "read": true, 00:19:31.345 "write": true, 00:19:31.345 "unmap": true, 00:19:31.345 "write_zeroes": true, 00:19:31.345 "flush": true, 00:19:31.345 "reset": false, 00:19:31.345 "compare": false, 00:19:31.345 "compare_and_write": false, 00:19:31.345 "abort": false, 00:19:31.345 "nvme_admin": false, 00:19:31.345 "nvme_io": false 00:19:31.345 }, 00:19:31.345 "driver_specific": { 00:19:31.345 "ftl": { 00:19:31.345 "base_bdev": "3f104183-b30e-426c-9570-945ea04cfd4e", 00:19:31.345 "cache": "nvc0n1p0" 00:19:31.345 } 00:19:31.345 } 00:19:31.345 } 00:19:31.345 ] 00:19:31.345 19:33:56 -- common/autotest_common.sh@893 -- # return 0 00:19:31.345 19:33:56 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:31.345 19:33:56 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:31.627 19:33:57 -- ftl/trim.sh@56 -- # echo ']}' 00:19:31.627 19:33:57 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:31.906 19:33:57 -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:31.906 { 00:19:31.906 "name": "ftl0", 00:19:31.906 "aliases": [ 00:19:31.906 "bc9fe5a4-cec1-4dff-a75f-c7daf97221d9" 00:19:31.906 ], 00:19:31.906 "product_name": "FTL disk", 00:19:31.906 "block_size": 4096, 00:19:31.906 "num_blocks": 23592960, 00:19:31.906 "uuid": "bc9fe5a4-cec1-4dff-a75f-c7daf97221d9", 00:19:31.906 "assigned_rate_limits": { 00:19:31.906 "rw_ios_per_sec": 0, 00:19:31.906 "rw_mbytes_per_sec": 0, 00:19:31.906 "r_mbytes_per_sec": 0, 00:19:31.906 "w_mbytes_per_sec": 0 00:19:31.906 }, 00:19:31.906 "claimed": false, 00:19:31.906 "zoned": false, 00:19:31.906 "supported_io_types": { 00:19:31.906 "read": true, 00:19:31.906 "write": true, 00:19:31.906 "unmap": true, 00:19:31.906 "write_zeroes": true, 00:19:31.906 "flush": true, 00:19:31.906 "reset": false, 00:19:31.906 "compare": false, 00:19:31.906 "compare_and_write": false, 00:19:31.906 "abort": false, 00:19:31.906 "nvme_admin": false, 00:19:31.906 "nvme_io": false 00:19:31.906 }, 00:19:31.906 "driver_specific": { 00:19:31.906 "ftl": { 00:19:31.906 "base_bdev": "3f104183-b30e-426c-9570-945ea04cfd4e", 00:19:31.906 "cache": "nvc0n1p0" 00:19:31.906 } 00:19:31.906 } 00:19:31.906 } 
00:19:31.906 ]' 00:19:31.906 19:33:57 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:31.906 19:33:57 -- ftl/trim.sh@60 -- # nb=23592960 00:19:31.906 19:33:57 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:32.183 [2024-04-24 19:33:57.701982] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.183 [2024-04-24 19:33:57.702063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:32.183 [2024-04-24 19:33:57.702081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:32.183 [2024-04-24 19:33:57.702094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.183 [2024-04-24 19:33:57.702137] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:32.183 [2024-04-24 19:33:57.706887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.183 [2024-04-24 19:33:57.706950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:32.183 [2024-04-24 19:33:57.706967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.728 ms 00:19:32.183 [2024-04-24 19:33:57.706977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.183 [2024-04-24 19:33:57.707738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.183 [2024-04-24 19:33:57.707775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:32.183 [2024-04-24 19:33:57.707789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:19:32.183 [2024-04-24 19:33:57.707799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.183 [2024-04-24 19:33:57.711360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.183 [2024-04-24 19:33:57.711398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:32.183 [2024-04-24 19:33:57.711419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.525 ms 00:19:32.183 [2024-04-24 19:33:57.711432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.183 [2024-04-24 19:33:57.718678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.183 [2024-04-24 19:33:57.718740] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:32.183 [2024-04-24 19:33:57.718755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.152 ms 00:19:32.183 [2024-04-24 19:33:57.718764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.183 [2024-04-24 19:33:57.771686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.183 [2024-04-24 19:33:57.772232] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:32.183 [2024-04-24 19:33:57.772349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.861 ms 00:19:32.183 [2024-04-24 19:33:57.772412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.183 [2024-04-24 19:33:57.800778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.183 [2024-04-24 19:33:57.801130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:32.183 [2024-04-24 19:33:57.801251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.178 ms 00:19:32.183 [2024-04-24 19:33:57.801345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:32.183 [2024-04-24 19:33:57.801801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.183 [2024-04-24 19:33:57.801953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:32.183 [2024-04-24 19:33:57.802056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:19:32.183 [2024-04-24 19:33:57.802141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.456 [2024-04-24 19:33:57.853575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.456 [2024-04-24 19:33:57.854095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:32.456 [2024-04-24 19:33:57.854165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.400 ms 00:19:32.456 [2024-04-24 19:33:57.854202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.456 [2024-04-24 19:33:57.904663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.456 [2024-04-24 19:33:57.904987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:32.456 [2024-04-24 19:33:57.905121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.227 ms 00:19:32.456 [2024-04-24 19:33:57.905216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.456 [2024-04-24 19:33:57.953984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.456 [2024-04-24 19:33:57.954385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:32.456 [2024-04-24 19:33:57.954537] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.596 ms 00:19:32.456 [2024-04-24 19:33:57.954698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.456 [2024-04-24 19:33:58.005579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:32.456 [2024-04-24 19:33:58.005968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:32.456 [2024-04-24 19:33:58.006130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.612 ms 00:19:32.456 [2024-04-24 19:33:58.006262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:32.456 [2024-04-24 19:33:58.006558] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:32.456 [2024-04-24 19:33:58.006744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:32.456 [2024-04-24 19:33:58.006894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:32.456 [2024-04-24 19:33:58.007029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:32.456 [2024-04-24 19:33:58.007168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:32.456 [2024-04-24 19:33:58.007326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:32.456 [2024-04-24 19:33:58.007481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:32.456 [2024-04-24 19:33:58.007629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:32.456 [2024-04-24 19:33:58.007808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:32.456 [2024-04-24 
19:33:58.007969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:19:32.456 [2024-04-24 19:33:58.008128 - 19:33:58.018616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 10-100: 0 / 261120 wr_cnt: 0 state: free
00:19:32.457 [2024-04-24 19:33:58.018771] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:32.457 [2024-04-24 19:33:58.018869] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc9fe5a4-cec1-4dff-a75f-c7daf97221d9
00:19:32.457 [2024-04-24 19:33:58.018949] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:32.457 [2024-04-24 19:33:58.019081] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:32.457 [2024-04-24 19:33:58.019177] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:32.457 [2024-04-24 19:33:58.019289] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:32.457 [2024-04-24 19:33:58.019364] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:32.457 [2024-04-24 19:33:58.019447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:19:32.457 [2024-04-24 19:33:58.019466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:19:32.457 [2024-04-24 19:33:58.019481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:19:32.457 [2024-04-24 19:33:58.019493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
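The stats dump above reports "WAF: inf" because the device counted 960 total (metadata/housekeeping) writes against zero user writes, and write amplification is conventionally total writes divided by user writes. A minimal sketch of that arithmetic, assuming the same two counters as the dump (variable names are illustrative, not SPDK identifiers):

    # Sketch: write-amplification factor as reported in the dump above.
    total_writes=960   # internal/metadata writes counted by the FTL
    user_writes=0      # no user data written at this point
    if [ "$user_writes" -eq 0 ]; then
      echo "WAF: inf"  # matches the log: any writes over zero user writes
    else
      awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.3f\n", t / u }'
    fi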
00:19:32.458 [2024-04-24 19:33:58.019518] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Dump statistics, duration: 12.989 ms, status: 0
00:19:32.458 [2024-04-24 19:33:58.045218] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize L2P, duration: 25.518 ms, status: 0
00:19:32.458 [2024-04-24 19:33:58.045804] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinitialize P2L checkpointing, duration: 0.321 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.135724] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize reloc, duration: 0.000 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.135993] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands metadata, duration: 0.000 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.136125] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize trim map, duration: 0.000 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.136202] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize valid map, duration: 0.000 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.306596] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize NV cache, duration: 0.000 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.364662] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize metadata, duration: 0.000 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.364881] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize core IO channel, duration: 0.000 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.364982] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize bands, duration: 0.000 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.365151] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize memory pools, duration: 0.000 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.365285] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Initialize superblock, duration: 0.000 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.365382] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open cache bdev, duration: 0.000 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.365480] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Rollback: Open base bdev, duration: 0.000 ms, status: 0
00:19:32.727 [2024-04-24 19:33:58.365772] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 665.031 ms, result 0
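Each trace_step block above reports a per-step "duration: <n> ms", while finish_msg reports the whole 'FTL shutdown' process as "duration = 665.031 ms". A hedged one-liner for totalling the per-step figures out of a saved console log (build.log is a hypothetical file name; the pattern relies only on the "duration: ... ms" wording used by trace_step, and deliberately does not match the "duration = ..." finish lines):

    grep -o 'duration: [0-9.]* ms' build.log | awk '{ sum += $2 } END { printf "trace_step total: %.3f ms\n", sum }'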
00:19:32.727 true
00:19:32.727 19:33:58 -- ftl/trim.sh@63 -- # killprocess 79106
00:19:32.727 19:33:58 -- common/autotest_common.sh@936 -- '[' -z 79106 ']'
00:19:32.727 19:33:58 -- common/autotest_common.sh@940 -- kill -0 79106
00:19:32.727 19:33:58 -- common/autotest_common.sh@941 -- uname
00:19:32.999 19:33:58 -- common/autotest_common.sh@941 -- '[' Linux = Linux ']'
00:19:32.999 19:33:58 -- common/autotest_common.sh@942 -- ps --no-headers -o comm= 79106
00:19:32.999 killing process with pid 79106
00:19:32.999 19:33:58 -- common/autotest_common.sh@942 -- process_name=reactor_0
00:19:32.999 19:33:58 -- common/autotest_common.sh@946 -- '[' reactor_0 = sudo ']'
00:19:32.999 19:33:58 -- common/autotest_common.sh@954 -- echo 'killing process with pid 79106'
00:19:32.999 19:33:58 -- common/autotest_common.sh@955 -- kill 79106
00:19:32.999 19:33:58 -- common/autotest_common.sh@960 -- wait 79106
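The xtrace lines above come from the harness's killprocess helper in common/autotest_common.sh. A hedged bash reconstruction of just the logic visible in the trace (the pid-empty check, the kill -0 liveness probe, the Linux-only ps lookup of the command name, the sudo guard, then kill and wait); the real helper has more branches, so treat this as a sketch, not the verbatim source:

    killprocess() {
      # Sketch of the behaviour traced above; not the verbatim helper.
      local pid=$1
      [ -z "$pid" ] && return 1        # '[' -z 79106 ']' in the trace
      kill -0 "$pid" || return 1       # process must still be alive
      local process_name=
      if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
      fi
      # The real helper special-cases process_name = sudo; only the plain path is shown.
      if [ "$process_name" != sudo ]; then
        echo "killing process with pid $pid"
        kill "$pid"
      fi
      wait "$pid"                      # reap it so the harness sees the exit code
    }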
00:19:41.115 19:34:05 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:19:41.374 65536+0 records in
00:19:41.374 65536+0 records out
00:19:41.374 268435456 bytes (268 MB, 256 MiB) copied, 1.12823 s, 238 MB/s
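The dd numbers are self-consistent: 65536 blocks of 4 KiB is 268435456 bytes (256 MiB), and dd reports its rate in decimal megabytes, so dividing by the elapsed 1.12823 s gives the reported 238 MB/s. Checked with shell arithmetic:

    echo $(( 65536 * 4096 ))                                            # 268435456 bytes = 256 MiB
    awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.12823 / 1e6 }'     # prints 238 MB/s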
00:19:41.374 19:34:07 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:41.633 [2024-04-24 19:34:07.140577] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization...
00:19:41.633 [2024-04-24 19:34:07.140792] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79390 ]
00:19:41.890 [2024-04-24 19:34:07.312251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:42.148 [2024-04-24 19:34:07.650166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:42.714 [2024-04-24 19:34:08.135208] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:42.714 [2024-04-24 19:34:08.135335] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:42.714 [2024-04-24 19:34:08.303166] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Check configuration, duration: 0.007 ms, status: 0
00:19:42.714 [2024-04-24 19:34:08.307058] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Open base bdev, duration: 3.766 ms, status: 0
00:19:42.715 [2024-04-24 19:34:08.307300] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:42.715 [2024-04-24 19:34:08.308755] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:42.715 [2024-04-24 19:34:08.308797] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Open cache bdev, duration: 1.510 ms, status: 0
00:19:42.715 [2024-04-24 19:34:08.310419] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:19:42.715 [2024-04-24 19:34:08.336143] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Load super block, duration: 25.770 ms, status: 0
00:19:42.715 [2024-04-24 19:34:08.336451] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Validate super block, duration: 0.042 ms, status: 0
00:19:42.715 [2024-04-24 19:34:08.344676] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize memory pools, duration: 8.146 ms, status: 0
00:19:42.715 [2024-04-24 19:34:08.344906] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands, duration: 0.086 ms, status: 0
00:19:42.715 [2024-04-24 19:34:08.344983] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Register IO device, duration: 0.013 ms, status: 0
00:19:42.715 [2024-04-24 19:34:08.345041] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:19:42.715 [2024-04-24 19:34:08.352182] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize core IO channel, duration: 7.164 ms, status: 0
00:19:42.715 [2024-04-24 19:34:08.352370] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Decorate bands, duration: 0.014 ms, status: 0
00:19:42.715 [2024-04-24 19:34:08.352459] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:19:42.715 [2024-04-24 19:34:08.352482] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes
00:19:42.715 [2024-04-24 19:34:08.352520] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:19:42.715 [2024-04-24 19:34:08.352538] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes
00:19:42.715 [2024-04-24 19:34:08.352618] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes
00:19:42.715 [2024-04-24 19:34:08.352630] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:19:42.715 [2024-04-24 19:34:08.352663] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes
00:19:42.715 [2024-04-24 19:34:08.352674] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:19:42.715 [2024-04-24 19:34:08.352685] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:19:42.715 [2024-04-24 19:34:08.352694] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:19:42.715 [2024-04-24 19:34:08.352703] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:19:42.715 [2024-04-24 19:34:08.352711] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024
00:19:42.715 [2024-04-24 19:34:08.352719] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4
00:19:42.715 [2024-04-24 19:34:08.352728] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize layout, duration: 0.273 ms, status: 0
00:19:42.715 [2024-04-24 19:34:08.352830] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Verify layout, duration: 0.047 ms, status: 0
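Those capacity figures cross-check with plain arithmetic: 23592960 L2P entries at the reported 4-byte address size is exactly 90 MiB of mapping table, the size given to the l2p region in the layout dump that follows, and at one entry per block it addresses 90 GiB of user space. As a sketch (the 4 KiB logical block size is inferred from the region sizes, not stated on these lines):

    echo "$(( 23592960 * 4 / 1048576 )) MiB of L2P table"        # 90 MiB, matches Region l2p below
    echo "$(( 23592960 * 4096 / 1073741824 )) GiB addressable"   # 90 GiB at 4 KiB per entry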
00:19:42.715 [2024-04-24 19:34:08.352939] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:19:42.715 [2024-04-24 19:34:08.352956] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset 0.00 MiB, blocks 0.12 MiB
00:19:42.715 [2024-04-24 19:34:08.352986] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset 0.12 MiB, blocks 90.00 MiB
00:19:42.715 [2024-04-24 19:34:08.353010] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset 90.12 MiB, blocks 0.50 MiB
00:19:42.715 [2024-04-24 19:34:08.353048] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset 90.62 MiB, blocks 0.50 MiB
00:19:42.715 [2024-04-24 19:34:08.353073] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset 107.62 MiB, blocks 0.12 MiB
00:19:42.715 [2024-04-24 19:34:08.353095] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset 107.75 MiB, blocks 0.12 MiB
00:19:42.715 [2024-04-24 19:34:08.353119] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc: offset 107.88 MiB, blocks 4096.00 MiB
00:19:42.715 [2024-04-24 19:34:08.353142] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset 91.12 MiB, blocks 4.00 MiB
00:19:42.715 [2024-04-24 19:34:08.353167] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset 95.12 MiB, blocks 4.00 MiB
00:19:42.715 [2024-04-24 19:34:08.353189] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset 99.12 MiB, blocks 4.00 MiB
00:19:42.715 [2024-04-24 19:34:08.353211] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset 103.12 MiB, blocks 4.00 MiB
00:19:42.715 [2024-04-24 19:34:08.353234] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset 107.12 MiB, blocks 0.25 MiB
00:19:42.715 [2024-04-24 19:34:08.353257] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset 107.38 MiB, blocks 0.25 MiB
00:19:42.715 [2024-04-24 19:34:08.353279] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:19:42.715 [2024-04-24 19:34:08.353288] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset 0.00 MiB, blocks 0.12 MiB
00:19:42.715 [2024-04-24 19:34:08.353312] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset 102400.25 MiB, blocks 3.38 MiB
00:19:42.715 [2024-04-24 19:34:08.353336] ftl_layout.c: 115-118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset 0.25 MiB, blocks 102400.00 MiB
00:19:42.716 [2024-04-24 19:34:08.353360] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:19:42.716 [2024-04-24 19:34:08.353371] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:42.716 [2024-04-24 19:34:08.353380] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:19:42.716 [2024-04-24 19:34:08.353389] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80
00:19:42.716 [2024-04-24 19:34:08.353398] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80
00:19:42.716 [2024-04-24 19:34:08.353407] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400
00:19:42.716 [2024-04-24 19:34:08.353416] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400
00:19:42.716 [2024-04-24 19:34:08.353426] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400
00:19:42.716 [2024-04-24 19:34:08.353434] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400
00:19:42.716 [2024-04-24 19:34:08.353442] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40
00:19:42.716 [2024-04-24 19:34:08.353451] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40
00:19:42.716 [2024-04-24 19:34:08.353459] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20
00:19:42.716 [2024-04-24 19:34:08.353467] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20
00:19:42.716 [2024-04-24 19:34:08.353476] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000
00:19:42.716 [2024-04-24 19:34:08.353484] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720
00:19:42.716 [2024-04-24 19:34:08.353492] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:19:42.716 [2024-04-24 19:34:08.353502] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:42.716 [2024-04-24 19:34:08.353511] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:19:42.716 [2024-04-24 19:34:08.353519] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:19:42.716 [2024-04-24 19:34:08.353528] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:19:42.716 [2024-04-24 19:34:08.353536] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
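The superblock entries give offsets and sizes in hexadecimal FTL blocks; with the 4 KiB block size inferred above they line up with the MiB figures in the region dump. For instance, the type 0x2 entry matches the 90.00 MiB l2p region, and each 0x400-block entry (types 0xa-0xd) matches one 4.00 MiB P2L checkpoint region:

    printf '%d blocks = %d MiB\n' $(( 0x5a00 )) $(( 0x5a00 * 4096 / 1048576 ))   # 23040 blocks = 90 MiB
    printf '%d blocks = %d MiB\n' $(( 0x400 ))  $(( 0x400 * 4096 / 1048576 ))    # 1024 blocks = 4 MiB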
00:19:42.716 [2024-04-24 19:34:08.353545] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Layout upgrade, duration: 0.651 ms, status: 0
00:19:42.716 [2024-04-24 19:34:08.384209] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize metadata, duration: 30.622 ms, status: 0
00:19:42.716 [2024-04-24 19:34:08.384484] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize band addresses, duration: 0.070 ms, status: 0
00:19:42.974 [2024-04-24 19:34:08.471927] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize NV cache, duration: 87.553 ms, status: 0
00:19:42.974 [2024-04-24 19:34:08.472152] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize valid map, duration: 0.005 ms, status: 0
00:19:42.974 [2024-04-24 19:34:08.472683] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize trim map, duration: 0.475 ms, status: 0
00:19:42.974 [2024-04-24 19:34:08.472867] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize bands metadata, duration: 0.104 ms, status: 0
00:19:42.974 [2024-04-24 19:34:08.500796] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize reloc, duration: 27.912 ms, status: 0
00:19:42.974 [2024-04-24 19:34:08.525676] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:19:42.974 [2024-04-24 19:34:08.525760] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:19:42.974 [2024-04-24 19:34:08.525778] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore NV cache metadata, duration: 24.758 ms, status: 0
00:19:42.975 [2024-04-24 19:34:08.565321] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore valid map metadata, duration: 39.402 ms, status: 0
00:19:42.975 [2024-04-24 19:34:08.590411] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore band info metadata, duration: 24.830 ms, status: 0
00:19:42.975 [2024-04-24 19:34:08.615315] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore trim metadata, duration: 24.640 ms, status: 0
00:19:42.975 [2024-04-24 19:34:08.616117] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize P2L checkpointing, duration: 0.451 ms, status: 0
00:19:43.233 [2024-04-24 19:34:08.726632] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore P2L checkpoints, duration: 110.624 ms, status: 0
00:19:43.233 [2024-04-24 19:34:08.745005] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:19:43.233 [2024-04-24 19:34:08.764234] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Initialize L2P, duration: 37.368 ms, status: 0
00:19:43.233 [2024-04-24 19:34:08.764463] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Restore L2P, duration: 0.007 ms, status: 0
00:19:43.233 [2024-04-24 19:34:08.764554] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize band initialization, duration: 0.036 ms, status: 0
00:19:43.233 [2024-04-24 19:34:08.766498] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Free P2L region bufs, duration: 1.894 ms, status: 0
00:19:43.233 [2024-04-24 19:34:08.766597] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Start core poller, duration: 0.007 ms, status: 0
00:19:43.233 [2024-04-24 19:34:08.766677] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:19:43.233 [2024-04-24 19:34:08.766692] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Self test on startup, duration: 0.017 ms, status: 0
00:19:43.233 [2024-04-24 19:34:08.816068] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL dirty state, duration: 49.412 ms, status: 0
00:19:43.233 [2024-04-24 19:34:08.816404] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Finalize initialization, duration: 0.044 ms, status: 0
00:19:43.233 [2024-04-24 19:34:08.817600] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:43.233 [2024-04-24 19:34:08.824780] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 515.084 ms, result 0
00:19:43.233 [2024-04-24 19:34:08.825587] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:43.233 [2024-04-24 19:34:08.848768] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:51.289  Copying: 36/256 [MB] (36 MBps) Copying: 68/256 [MB] (32 MBps) Copying: 102/256 [MB] (33 MBps) Copying: 134/256 [MB] (32 MBps) Copying: 166/256 [MB] (32 MBps) Copying: 199/256 [MB] (32 MBps) Copying: 231/256 [MB] (32 MBps) Copying: 256/256 [MB] (average 32 MBps)
00:19:51.289 [2024-04-24 19:34:16.618362] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
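The copy above moved 256 MB at an average of 32 MBps, roughly 8 s, which is consistent with the gap between the IO channel create at 19:34:08.848768 and the destroy at 19:34:16.618362. As arithmetic:

    awk 'BEGIN { printf "%.1f s expected, %.1f s observed\n", 256 / 32, 16.618362 - 8.848768 }'   # 8.0 s vs 7.8 s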
00:19:51.289 [2024-04-24 19:34:16.636084] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Deinit core IO channel, duration: 0.006 ms, status: 0
00:19:51.289 [2024-04-24 19:34:16.636228] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:51.289 [2024-04-24 19:34:16.640584] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Unregister IO device, duration: 4.339 ms, status: 0
00:19:51.289 [2024-04-24 19:34:16.642919] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Stop core poller, duration: 2.143 ms, status: 0
00:19:51.289 [2024-04-24 19:34:16.650319] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist L2P, duration: 7.304 ms, status: 0
00:19:51.289 [2024-04-24 19:34:16.657247] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Finish L2P unmaps, duration: 6.778 ms, status: 0
00:19:51.289 [2024-04-24 19:34:16.703698] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist NV cache metadata, duration: 46.365 ms, status: 0
00:19:51.289 [2024-04-24 19:34:16.730703] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist valid map metadata, duration: 26.826 ms, status: 0
00:19:51.289 [2024-04-24 19:34:16.731021] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist P2L metadata, duration: 0.109 ms, status: 0
00:19:51.289 [2024-04-24 19:34:16.778684] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: persist band info metadata, duration: 47.649 ms, status: 0
00:19:51.289 [2024-04-24 19:34:16.824616] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: persist trim metadata, duration: 45.774 ms, status: 0
00:19:51.289 [2024-04-24 19:34:16.870594] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Persist superblock, duration: 45.791 ms, status: 0
00:19:51.289 [2024-04-24 19:34:16.917299] mngt/ftl_mngt.c: 406-410:trace_step: *NOTICE*: [FTL][ftl0] Action: Set FTL clean state, duration: 46.374 ms, status: 0
00:19:51.289 [2024-04-24 19:34:16.917546] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:51.289 [2024-04-24 19:34:16.917570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:19:51.289 [2024-04-24 19:34:16.917585 - 19:34:16.918442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-91: 0 / 261120 wr_cnt: 0 state: free
00:19:51.290 [2024-04-24 19:34:16.918451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:51.290 [2024-04-24 19:34:16.918460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-04-24 19:34:16.918494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-04-24 19:34:16.918503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-04-24 19:34:16.918512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-04-24 19:34:16.918522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-04-24 19:34:16.918531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-04-24 19:34:16.918540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-04-24 19:34:16.918549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:51.291 [2024-04-24 19:34:16.918566] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:51.291 [2024-04-24 19:34:16.918576] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc9fe5a4-cec1-4dff-a75f-c7daf97221d9 00:19:51.291 [2024-04-24 19:34:16.918590] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:51.291 [2024-04-24 19:34:16.918598] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:51.291 [2024-04-24 19:34:16.918607] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:51.291 [2024-04-24 19:34:16.918615] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:51.291 [2024-04-24 19:34:16.918625] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:51.291 [2024-04-24 19:34:16.918649] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:51.291 [2024-04-24 19:34:16.918659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:51.291 [2024-04-24 19:34:16.918667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:51.291 [2024-04-24 19:34:16.918674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:51.291 [2024-04-24 19:34:16.918685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.291 [2024-04-24 19:34:16.918694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:51.291 [2024-04-24 19:34:16.918704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:19:51.291 [2024-04-24 19:34:16.918714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.291 [2024-04-24 19:34:16.942453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.291 [2024-04-24 19:34:16.942512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:51.291 [2024-04-24 19:34:16.942524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.748 ms 00:19:51.291 [2024-04-24 19:34:16.942532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.291 [2024-04-24 19:34:16.942892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.291 [2024-04-24 19:34:16.942904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:19:51.291 [2024-04-24 19:34:16.942913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:19:51.291 [2024-04-24 19:34:16.942922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.549 [2024-04-24 19:34:17.007839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.549 [2024-04-24 19:34:17.007903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:51.549 [2024-04-24 19:34:17.007918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.549 [2024-04-24 19:34:17.007927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.549 [2024-04-24 19:34:17.008029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.549 [2024-04-24 19:34:17.008040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:51.549 [2024-04-24 19:34:17.008049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.549 [2024-04-24 19:34:17.008058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.549 [2024-04-24 19:34:17.008123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.549 [2024-04-24 19:34:17.008136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:51.549 [2024-04-24 19:34:17.008145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.549 [2024-04-24 19:34:17.008154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.549 [2024-04-24 19:34:17.008174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.549 [2024-04-24 19:34:17.008183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:51.549 [2024-04-24 19:34:17.008192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.549 [2024-04-24 19:34:17.008201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.549 [2024-04-24 19:34:17.133454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.549 [2024-04-24 19:34:17.133514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:51.549 [2024-04-24 19:34:17.133528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.549 [2024-04-24 19:34:17.133552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.549 [2024-04-24 19:34:17.185919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.549 [2024-04-24 19:34:17.185982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:51.549 [2024-04-24 19:34:17.185995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.549 [2024-04-24 19:34:17.186003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.549 [2024-04-24 19:34:17.186075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.549 [2024-04-24 19:34:17.186084] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:51.549 [2024-04-24 19:34:17.186092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.549 [2024-04-24 19:34:17.186099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.549 [2024-04-24 19:34:17.186126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.549 
[2024-04-24 19:34:17.186134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:51.549 [2024-04-24 19:34:17.186141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.549 [2024-04-24 19:34:17.186148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.549 [2024-04-24 19:34:17.186252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.549 [2024-04-24 19:34:17.186267] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:51.549 [2024-04-24 19:34:17.186274] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.550 [2024-04-24 19:34:17.186281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.550 [2024-04-24 19:34:17.186316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.550 [2024-04-24 19:34:17.186326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:51.550 [2024-04-24 19:34:17.186333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.550 [2024-04-24 19:34:17.186340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.550 [2024-04-24 19:34:17.186383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.550 [2024-04-24 19:34:17.186394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:51.550 [2024-04-24 19:34:17.186401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.550 [2024-04-24 19:34:17.186408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.550 [2024-04-24 19:34:17.186453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:51.550 [2024-04-24 19:34:17.186462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:51.550 [2024-04-24 19:34:17.186470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:51.550 [2024-04-24 19:34:17.186477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.550 [2024-04-24 19:34:17.186612] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 551.605 ms, result 0 00:19:53.452 00:19:53.452 00:19:53.452 19:34:19 -- ftl/trim.sh@72 -- # svcpid=79516 00:19:53.452 19:34:19 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:53.452 19:34:19 -- ftl/trim.sh@73 -- # waitforlisten 79516 00:19:53.452 19:34:19 -- common/autotest_common.sh@817 -- # '[' -z 79516 ']' 00:19:53.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.452 19:34:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.452 19:34:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:53.452 19:34:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.452 19:34:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:53.452 19:34:19 -- common/autotest_common.sh@10 -- # set +x 00:19:53.713 [2024-04-24 19:34:19.174071] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:19:53.713 [2024-04-24 19:34:19.174216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79516 ] 00:19:53.713 [2024-04-24 19:34:19.343581] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.972 [2024-04-24 19:34:19.604597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.353 19:34:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:55.354 19:34:20 -- common/autotest_common.sh@850 -- # return 0 00:19:55.354 19:34:20 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:55.354 [2024-04-24 19:34:20.816874] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.354 [2024-04-24 19:34:20.816947] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.354 [2024-04-24 19:34:20.984277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.354 [2024-04-24 19:34:20.984336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:55.354 [2024-04-24 19:34:20.984354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:55.354 [2024-04-24 19:34:20.984363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.354 [2024-04-24 19:34:20.987338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.354 [2024-04-24 19:34:20.987388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.354 [2024-04-24 19:34:20.987400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.960 ms 00:19:55.354 [2024-04-24 19:34:20.987411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.354 [2024-04-24 19:34:20.987509] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:55.354 [2024-04-24 19:34:20.988725] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:55.354 [2024-04-24 19:34:20.988759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.354 [2024-04-24 19:34:20.988769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.354 [2024-04-24 19:34:20.988779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.266 ms 00:19:55.354 [2024-04-24 19:34:20.988786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.354 [2024-04-24 19:34:20.990190] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:55.354 [2024-04-24 19:34:21.011131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.354 [2024-04-24 19:34:21.011194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:55.354 [2024-04-24 19:34:21.011209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.985 ms 00:19:55.354 [2024-04-24 19:34:21.011219] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.354 [2024-04-24 19:34:21.011340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.354 [2024-04-24 19:34:21.011379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:55.354 [2024-04-24 19:34:21.011389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.033 ms 00:19:55.354 [2024-04-24 19:34:21.011399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.354 [2024-04-24 19:34:21.018418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.354 [2024-04-24 19:34:21.018456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.354 [2024-04-24 19:34:21.018468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.985 ms 00:19:55.354 [2024-04-24 19:34:21.018481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.354 [2024-04-24 19:34:21.018585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.354 [2024-04-24 19:34:21.018603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.354 [2024-04-24 19:34:21.018612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:55.354 [2024-04-24 19:34:21.018622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.354 [2024-04-24 19:34:21.018674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.354 [2024-04-24 19:34:21.018686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:55.354 [2024-04-24 19:34:21.018694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:55.354 [2024-04-24 19:34:21.018703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.354 [2024-04-24 19:34:21.018731] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:55.354 [2024-04-24 19:34:21.025021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.354 [2024-04-24 19:34:21.025058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.354 [2024-04-24 19:34:21.025069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.309 ms 00:19:55.354 [2024-04-24 19:34:21.025077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.354 [2024-04-24 19:34:21.025150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.354 [2024-04-24 19:34:21.025161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:55.354 [2024-04-24 19:34:21.025173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:55.354 [2024-04-24 19:34:21.025181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.354 [2024-04-24 19:34:21.025204] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:55.354 [2024-04-24 19:34:21.025224] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:55.354 [2024-04-24 19:34:21.025257] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:55.354 [2024-04-24 19:34:21.025286] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:55.354 [2024-04-24 19:34:21.025358] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:55.354 [2024-04-24 19:34:21.025369] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:55.354 [2024-04-24 19:34:21.025380] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout 
blob store 0x140 bytes 00:19:55.354 [2024-04-24 19:34:21.025389] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:55.354 [2024-04-24 19:34:21.025400] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:55.354 [2024-04-24 19:34:21.025408] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:55.354 [2024-04-24 19:34:21.025419] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:55.354 [2024-04-24 19:34:21.025426] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:55.354 [2024-04-24 19:34:21.025435] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:55.354 [2024-04-24 19:34:21.025444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.354 [2024-04-24 19:34:21.025458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:55.354 [2024-04-24 19:34:21.025466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:19:55.354 [2024-04-24 19:34:21.025475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.354 [2024-04-24 19:34:21.025535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.354 [2024-04-24 19:34:21.025547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:55.354 [2024-04-24 19:34:21.025555] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:55.354 [2024-04-24 19:34:21.025564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.354 [2024-04-24 19:34:21.025649] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:55.354 [2024-04-24 19:34:21.025662] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:55.354 [2024-04-24 19:34:21.025673] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.354 [2024-04-24 19:34:21.025687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.354 [2024-04-24 19:34:21.025695] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:55.354 [2024-04-24 19:34:21.025704] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:55.354 [2024-04-24 19:34:21.025711] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:55.354 [2024-04-24 19:34:21.025720] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:55.354 [2024-04-24 19:34:21.025728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:55.354 [2024-04-24 19:34:21.025738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.354 [2024-04-24 19:34:21.025744] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:55.354 [2024-04-24 19:34:21.025753] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:55.354 [2024-04-24 19:34:21.025760] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.354 [2024-04-24 19:34:21.025769] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:55.354 [2024-04-24 19:34:21.025776] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:55.354 [2024-04-24 19:34:21.025784] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.354 [2024-04-24 19:34:21.025791] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:19:55.354 [2024-04-24 19:34:21.025799] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:55.354 [2024-04-24 19:34:21.025806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.354 [2024-04-24 19:34:21.025825] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:55.354 [2024-04-24 19:34:21.025832] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:55.354 [2024-04-24 19:34:21.025841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:55.354 [2024-04-24 19:34:21.025847] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:55.354 [2024-04-24 19:34:21.025856] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:55.354 [2024-04-24 19:34:21.025862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:55.354 [2024-04-24 19:34:21.025873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:55.354 [2024-04-24 19:34:21.025880] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:55.354 [2024-04-24 19:34:21.025889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:55.354 [2024-04-24 19:34:21.025896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:55.354 [2024-04-24 19:34:21.025903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:55.354 [2024-04-24 19:34:21.025914] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:55.354 [2024-04-24 19:34:21.025922] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:55.354 [2024-04-24 19:34:21.025929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:55.354 [2024-04-24 19:34:21.025937] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:55.354 [2024-04-24 19:34:21.025944] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:55.354 [2024-04-24 19:34:21.025952] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:55.354 [2024-04-24 19:34:21.025958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.355 [2024-04-24 19:34:21.025966] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:55.355 [2024-04-24 19:34:21.025973] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:55.355 [2024-04-24 19:34:21.025980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.355 [2024-04-24 19:34:21.025986] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:55.355 [2024-04-24 19:34:21.025998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:55.355 [2024-04-24 19:34:21.026004] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.355 [2024-04-24 19:34:21.026013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.355 [2024-04-24 19:34:21.026020] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:55.355 [2024-04-24 19:34:21.026028] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:55.355 [2024-04-24 19:34:21.026035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:55.355 [2024-04-24 19:34:21.026043] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:55.355 [2024-04-24 19:34:21.026049] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:55.355 [2024-04-24 19:34:21.026058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:55.355 [2024-04-24 19:34:21.026066] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:55.355 [2024-04-24 19:34:21.026076] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.355 [2024-04-24 19:34:21.026085] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:55.355 [2024-04-24 19:34:21.026094] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:55.355 [2024-04-24 19:34:21.026101] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:55.355 [2024-04-24 19:34:21.026111] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:55.355 [2024-04-24 19:34:21.026119] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:55.355 [2024-04-24 19:34:21.026129] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:55.355 [2024-04-24 19:34:21.026135] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:55.355 [2024-04-24 19:34:21.026143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:55.355 [2024-04-24 19:34:21.026150] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:55.355 [2024-04-24 19:34:21.026159] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:55.355 [2024-04-24 19:34:21.026166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:55.355 [2024-04-24 19:34:21.026176] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:55.355 [2024-04-24 19:34:21.026183] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:55.355 [2024-04-24 19:34:21.026192] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:55.355 [2024-04-24 19:34:21.026199] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.355 [2024-04-24 19:34:21.026211] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:55.355 [2024-04-24 19:34:21.026219] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:55.355 [2024-04-24 19:34:21.026227] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:55.355 [2024-04-24 19:34:21.026234] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:55.355 [2024-04-24 19:34:21.026243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.355 [2024-04-24 19:34:21.026255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:55.355 [2024-04-24 19:34:21.026267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:19:55.355 [2024-04-24 19:34:21.026274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.636 [2024-04-24 19:34:21.053784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.636 [2024-04-24 19:34:21.053833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.636 [2024-04-24 19:34:21.053848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.510 ms 00:19:55.636 [2024-04-24 19:34:21.053857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.636 [2024-04-24 19:34:21.054007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.636 [2024-04-24 19:34:21.054018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:55.636 [2024-04-24 19:34:21.054028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:19:55.636 [2024-04-24 19:34:21.054035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.636 [2024-04-24 19:34:21.109924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.636 [2024-04-24 19:34:21.109978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.636 [2024-04-24 19:34:21.109995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.973 ms 00:19:55.636 [2024-04-24 19:34:21.110003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.636 [2024-04-24 19:34:21.110123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.636 [2024-04-24 19:34:21.110134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.636 [2024-04-24 19:34:21.110160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:55.636 [2024-04-24 19:34:21.110168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.636 [2024-04-24 19:34:21.110604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.636 [2024-04-24 19:34:21.110627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.636 [2024-04-24 19:34:21.110655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:19:55.636 [2024-04-24 19:34:21.110665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.636 [2024-04-24 19:34:21.110778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.636 [2024-04-24 19:34:21.110799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.636 [2024-04-24 19:34:21.110810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:55.636 [2024-04-24 19:34:21.110818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.636 [2024-04-24 19:34:21.135393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.636 [2024-04-24 
19:34:21.135451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.636 [2024-04-24 19:34:21.135468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.594 ms 00:19:55.636 [2024-04-24 19:34:21.135477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.636 [2024-04-24 19:34:21.156800] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:55.636 [2024-04-24 19:34:21.156860] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:55.636 [2024-04-24 19:34:21.156903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.636 [2024-04-24 19:34:21.156917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:55.636 [2024-04-24 19:34:21.156935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.279 ms 00:19:55.636 [2024-04-24 19:34:21.156947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.636 [2024-04-24 19:34:21.191902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.636 [2024-04-24 19:34:21.191981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:55.636 [2024-04-24 19:34:21.192000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.859 ms 00:19:55.636 [2024-04-24 19:34:21.192009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.636 [2024-04-24 19:34:21.213974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.636 [2024-04-24 19:34:21.214033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:55.636 [2024-04-24 19:34:21.214048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.876 ms 00:19:55.636 [2024-04-24 19:34:21.214056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.636 [2024-04-24 19:34:21.235173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.636 [2024-04-24 19:34:21.235238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:55.636 [2024-04-24 19:34:21.235254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.034 ms 00:19:55.636 [2024-04-24 19:34:21.235264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.636 [2024-04-24 19:34:21.235836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.636 [2024-04-24 19:34:21.235856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:55.636 [2024-04-24 19:34:21.235869] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:19:55.636 [2024-04-24 19:34:21.235882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.896 [2024-04-24 19:34:21.333771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.896 [2024-04-24 19:34:21.333835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:55.896 [2024-04-24 19:34:21.333854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.030 ms 00:19:55.896 [2024-04-24 19:34:21.333864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.896 [2024-04-24 19:34:21.350310] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:55.896 [2024-04-24 19:34:21.368697] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.896 [2024-04-24 19:34:21.368756] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:55.896 [2024-04-24 19:34:21.368772] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.737 ms 00:19:55.896 [2024-04-24 19:34:21.368787] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.896 [2024-04-24 19:34:21.368902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.896 [2024-04-24 19:34:21.368917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:55.896 [2024-04-24 19:34:21.368927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:55.896 [2024-04-24 19:34:21.368942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.896 [2024-04-24 19:34:21.368999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.896 [2024-04-24 19:34:21.369011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:55.896 [2024-04-24 19:34:21.369020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:55.896 [2024-04-24 19:34:21.369030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.896 [2024-04-24 19:34:21.370821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.896 [2024-04-24 19:34:21.370863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:55.896 [2024-04-24 19:34:21.370873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.770 ms 00:19:55.896 [2024-04-24 19:34:21.370884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.896 [2024-04-24 19:34:21.370915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.896 [2024-04-24 19:34:21.370927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:55.896 [2024-04-24 19:34:21.370936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:55.896 [2024-04-24 19:34:21.370946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.896 [2024-04-24 19:34:21.370983] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:55.896 [2024-04-24 19:34:21.370997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.896 [2024-04-24 19:34:21.371008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:55.896 [2024-04-24 19:34:21.371020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:55.896 [2024-04-24 19:34:21.371029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.896 [2024-04-24 19:34:21.416582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.896 [2024-04-24 19:34:21.416670] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:55.896 [2024-04-24 19:34:21.416690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.609 ms 00:19:55.897 [2024-04-24 19:34:21.416699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.897 [2024-04-24 19:34:21.416880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.897 [2024-04-24 19:34:21.416895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:55.897 [2024-04-24 19:34:21.416908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.039 ms 00:19:55.897 [2024-04-24 19:34:21.416916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.897 [2024-04-24 19:34:21.417985] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.897 [2024-04-24 19:34:21.424437] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 434.180 ms, result 0 00:19:55.897 [2024-04-24 19:34:21.425407] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.897 Some configs were skipped because the RPC state that can call them passed over. 00:19:55.897 19:34:21 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:56.156 [2024-04-24 19:34:21.709934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.156 [2024-04-24 19:34:21.710098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:56.156 [2024-04-24 19:34:21.710136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.020 ms 00:19:56.156 [2024-04-24 19:34:21.710165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.156 [2024-04-24 19:34:21.710225] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 44.316 ms, result 0 00:19:56.156 true 00:19:56.156 19:34:21 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:56.415 [2024-04-24 19:34:21.961253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.415 [2024-04-24 19:34:21.961323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:19:56.415 [2024-04-24 19:34:21.961341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.389 ms 00:19:56.415 [2024-04-24 19:34:21.961350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.415 [2024-04-24 19:34:21.961408] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 43.562 ms, result 0 00:19:56.415 true 00:19:56.415 19:34:21 -- ftl/trim.sh@81 -- # killprocess 79516 00:19:56.415 19:34:21 -- common/autotest_common.sh@936 -- # '[' -z 79516 ']' 00:19:56.415 19:34:21 -- common/autotest_common.sh@940 -- # kill -0 79516 00:19:56.415 19:34:21 -- common/autotest_common.sh@941 -- # uname 00:19:56.415 19:34:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:56.415 19:34:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79516 00:19:56.415 19:34:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:56.415 19:34:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:56.415 killing process with pid 79516 00:19:56.415 19:34:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79516' 00:19:56.415 19:34:22 -- common/autotest_common.sh@955 -- # kill 79516 00:19:56.415 19:34:22 -- common/autotest_common.sh@960 -- # wait 79516 00:19:57.796 [2024-04-24 19:34:23.273012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.796 [2024-04-24 19:34:23.273074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:57.796 [2024-04-24 19:34:23.273087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:57.796 [2024-04-24 
19:34:23.273096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.796 [2024-04-24 19:34:23.273118] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:57.797 [2024-04-24 19:34:23.276891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.797 [2024-04-24 19:34:23.276941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:57.797 [2024-04-24 19:34:23.276954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.755 ms 00:19:57.797 [2024-04-24 19:34:23.276965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.797 [2024-04-24 19:34:23.277239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.797 [2024-04-24 19:34:23.277259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:57.797 [2024-04-24 19:34:23.277271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:19:57.797 [2024-04-24 19:34:23.277291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.797 [2024-04-24 19:34:23.280583] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.797 [2024-04-24 19:34:23.280616] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:57.797 [2024-04-24 19:34:23.280661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.274 ms 00:19:57.797 [2024-04-24 19:34:23.280672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.797 [2024-04-24 19:34:23.286454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.797 [2024-04-24 19:34:23.286489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:57.797 [2024-04-24 19:34:23.286503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.757 ms 00:19:57.797 [2024-04-24 19:34:23.286511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.797 [2024-04-24 19:34:23.303452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.797 [2024-04-24 19:34:23.303503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:57.797 [2024-04-24 19:34:23.303519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.895 ms 00:19:57.797 [2024-04-24 19:34:23.303528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.797 [2024-04-24 19:34:23.315239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.797 [2024-04-24 19:34:23.315293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:57.797 [2024-04-24 19:34:23.315308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.636 ms 00:19:57.797 [2024-04-24 19:34:23.315316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.797 [2024-04-24 19:34:23.315506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.797 [2024-04-24 19:34:23.315519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:57.797 [2024-04-24 19:34:23.315534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:57.797 [2024-04-24 19:34:23.315543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.797 [2024-04-24 19:34:23.333062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.797 [2024-04-24 19:34:23.333117] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:57.797 [2024-04-24 19:34:23.333130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.519 ms 00:19:57.797 [2024-04-24 19:34:23.333138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.797 [2024-04-24 19:34:23.351172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.797 [2024-04-24 19:34:23.351234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:57.797 [2024-04-24 19:34:23.351253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.000 ms 00:19:57.797 [2024-04-24 19:34:23.351261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.797 [2024-04-24 19:34:23.368042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.797 [2024-04-24 19:34:23.368090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:57.797 [2024-04-24 19:34:23.368106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.741 ms 00:19:57.797 [2024-04-24 19:34:23.368114] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.797 [2024-04-24 19:34:23.384356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.797 [2024-04-24 19:34:23.384405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:57.797 [2024-04-24 19:34:23.384420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.194 ms 00:19:57.797 [2024-04-24 19:34:23.384427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.797 [2024-04-24 19:34:23.384490] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:57.797 [2024-04-24 19:34:23.384508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:57.797 [2024-04-24 19:34:23.384520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:57.797 [2024-04-24 19:34:23.384528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:57.797 [2024-04-24 19:34:23.384538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:57.797 [2024-04-24 19:34:23.384546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:57.797 [2024-04-24 19:34:23.384556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:57.797 [2024-04-24 19:34:23.384564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:57.797 [2024-04-24 19:34:23.384576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:57.797 [2024-04-24 19:34:23.384584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:57.797 [2024-04-24 19:34:23.384593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:57.797 [2024-04-24 19:34:23.384600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:57.797 [2024-04-24 19:34:23.384610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:57.797 [2024-04-24 19:34:23.384617] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free [... Bands 14 through 87 elided: each reports the identical entry "0 / 261120 wr_cnt: 0 state: free" ...]
00:19:57.798 [2024-04-24 19:34:23.385253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:57.798 [2024-04-24 19:34:23.385386] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:57.798 [2024-04-24 19:34:23.385396] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc9fe5a4-cec1-4dff-a75f-c7daf97221d9 00:19:57.798 [2024-04-24 19:34:23.385404] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:57.798 [2024-04-24 19:34:23.385415] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:57.798 [2024-04-24 19:34:23.385422] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:57.798 [2024-04-24 19:34:23.385431] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:57.798 [2024-04-24 19:34:23.385439] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:57.798 [2024-04-24 19:34:23.385451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:57.798 [2024-04-24 19:34:23.385458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:57.798 [2024-04-24 19:34:23.385465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:57.798 [2024-04-24 19:34:23.385472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:57.798 [2024-04-24 19:34:23.385482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.798 [2024-04-24 19:34:23.385489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:57.798 [2024-04-24 19:34:23.385499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:19:57.798 [2024-04-24 19:34:23.385505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:57.798 [2024-04-24 19:34:23.406672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.798 [2024-04-24 19:34:23.406725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:57.798 [2024-04-24 19:34:23.406741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.139 ms 00:19:57.798 [2024-04-24 19:34:23.406754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.798 [2024-04-24 19:34:23.407088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.798 [2024-04-24 19:34:23.407103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:57.798 [2024-04-24 19:34:23.407114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:19:57.798 [2024-04-24 19:34:23.407122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.058 [2024-04-24 19:34:23.480699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.058 [2024-04-24 19:34:23.480754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:58.058 [2024-04-24 19:34:23.480774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.058 [2024-04-24 19:34:23.480783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.058 [2024-04-24 19:34:23.480911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.058 [2024-04-24 19:34:23.480923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.058 [2024-04-24 19:34:23.480934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.058 [2024-04-24 19:34:23.480943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.058 [2024-04-24 19:34:23.481003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.059 [2024-04-24 19:34:23.481016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.059 [2024-04-24 19:34:23.481027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.059 [2024-04-24 19:34:23.481038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.059 [2024-04-24 19:34:23.481063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.059 [2024-04-24 19:34:23.481073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.059 [2024-04-24 19:34:23.481085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.059 [2024-04-24 19:34:23.481093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.059 [2024-04-24 19:34:23.627333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.059 [2024-04-24 19:34:23.627393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.059 [2024-04-24 19:34:23.627410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.059 [2024-04-24 19:34:23.627423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.059 [2024-04-24 19:34:23.683688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.059 [2024-04-24 19:34:23.683747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.059 [2024-04-24 19:34:23.683764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.059 
[2024-04-24 19:34:23.683774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.059 [2024-04-24 19:34:23.683873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.059 [2024-04-24 19:34:23.683885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.059 [2024-04-24 19:34:23.683897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.059 [2024-04-24 19:34:23.683906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.059 [2024-04-24 19:34:23.683947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.059 [2024-04-24 19:34:23.683957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.059 [2024-04-24 19:34:23.683968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.059 [2024-04-24 19:34:23.683978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.059 [2024-04-24 19:34:23.684107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.059 [2024-04-24 19:34:23.684129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.059 [2024-04-24 19:34:23.684141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.059 [2024-04-24 19:34:23.684150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.059 [2024-04-24 19:34:23.684194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.059 [2024-04-24 19:34:23.684209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:58.059 [2024-04-24 19:34:23.684220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.059 [2024-04-24 19:34:23.684230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.059 [2024-04-24 19:34:23.684275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.059 [2024-04-24 19:34:23.684287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.059 [2024-04-24 19:34:23.684298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.059 [2024-04-24 19:34:23.684308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.059 [2024-04-24 19:34:23.684365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:58.059 [2024-04-24 19:34:23.684378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.059 [2024-04-24 19:34:23.684389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:58.059 [2024-04-24 19:34:23.684397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.059 [2024-04-24 19:34:23.684556] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 412.310 ms, result 0 00:19:59.967 19:34:25 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:59.967 19:34:25 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:59.967 [2024-04-24 19:34:25.315251] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:19:59.967 [2024-04-24 19:34:25.315387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79591 ] 00:19:59.967 [2024-04-24 19:34:25.483908] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.227 [2024-04-24 19:34:25.772061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.795 [2024-04-24 19:34:26.269438] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.795 [2024-04-24 19:34:26.269552] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.795 [2024-04-24 19:34:26.428301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.795 [2024-04-24 19:34:26.428398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:00.795 [2024-04-24 19:34:26.428417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:00.795 [2024-04-24 19:34:26.428432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.795 [2024-04-24 19:34:26.432136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.795 [2024-04-24 19:34:26.432182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.795 [2024-04-24 19:34:26.432195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.675 ms 00:20:00.795 [2024-04-24 19:34:26.432205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.795 [2024-04-24 19:34:26.432310] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:00.795 [2024-04-24 19:34:26.433660] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:00.795 [2024-04-24 19:34:26.433700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.795 [2024-04-24 19:34:26.433713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.795 [2024-04-24 19:34:26.433730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.403 ms 00:20:00.795 [2024-04-24 19:34:26.433740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.795 [2024-04-24 19:34:26.436426] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:00.795 [2024-04-24 19:34:26.458899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.795 [2024-04-24 19:34:26.458947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:00.795 [2024-04-24 19:34:26.458962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.515 ms 00:20:00.795 [2024-04-24 19:34:26.458973] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.795 [2024-04-24 19:34:26.459103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.795 [2024-04-24 19:34:26.459117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:00.795 [2024-04-24 19:34:26.459127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:00.795 [2024-04-24 19:34:26.459140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.056 [2024-04-24 19:34:26.473046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.056 [2024-04-24 
19:34:26.473097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.056 [2024-04-24 19:34:26.473112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.870 ms 00:20:01.056 [2024-04-24 19:34:26.473122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.056 [2024-04-24 19:34:26.473335] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.056 [2024-04-24 19:34:26.473357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.056 [2024-04-24 19:34:26.473375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:20:01.056 [2024-04-24 19:34:26.473384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.056 [2024-04-24 19:34:26.473449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.056 [2024-04-24 19:34:26.473460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:01.056 [2024-04-24 19:34:26.473470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:01.056 [2024-04-24 19:34:26.473479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.057 [2024-04-24 19:34:26.473512] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:01.057 [2024-04-24 19:34:26.480722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.057 [2024-04-24 19:34:26.480757] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.057 [2024-04-24 19:34:26.480784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.235 ms 00:20:01.057 [2024-04-24 19:34:26.480795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.057 [2024-04-24 19:34:26.480862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.057 [2024-04-24 19:34:26.480879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:01.057 [2024-04-24 19:34:26.480889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:01.057 [2024-04-24 19:34:26.480898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.057 [2024-04-24 19:34:26.480922] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:01.057 [2024-04-24 19:34:26.480950] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:01.057 [2024-04-24 19:34:26.480989] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:01.057 [2024-04-24 19:34:26.481006] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:01.057 [2024-04-24 19:34:26.481095] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:01.057 [2024-04-24 19:34:26.481114] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:01.057 [2024-04-24 19:34:26.481127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:01.057 [2024-04-24 19:34:26.481156] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:01.057 [2024-04-24 19:34:26.481167] ftl_layout.c: 675:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:01.057 [2024-04-24 19:34:26.481176] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:01.057 [2024-04-24 19:34:26.481186] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:01.057 [2024-04-24 19:34:26.481196] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:01.057 [2024-04-24 19:34:26.481204] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:01.057 [2024-04-24 19:34:26.481214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.057 [2024-04-24 19:34:26.481224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:01.057 [2024-04-24 19:34:26.481238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:20:01.057 [2024-04-24 19:34:26.481251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.057 [2024-04-24 19:34:26.481326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.057 [2024-04-24 19:34:26.481340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:01.057 [2024-04-24 19:34:26.481354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:01.057 [2024-04-24 19:34:26.481370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.057 [2024-04-24 19:34:26.481451] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:01.057 [2024-04-24 19:34:26.481469] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:01.057 [2024-04-24 19:34:26.481479] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.057 [2024-04-24 19:34:26.481494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.057 [2024-04-24 19:34:26.481504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:01.057 [2024-04-24 19:34:26.481513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:01.057 [2024-04-24 19:34:26.481522] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:01.057 [2024-04-24 19:34:26.481530] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:01.057 [2024-04-24 19:34:26.481539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:01.057 [2024-04-24 19:34:26.481547] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.057 [2024-04-24 19:34:26.481591] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:01.057 [2024-04-24 19:34:26.481602] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:01.057 [2024-04-24 19:34:26.481612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.057 [2024-04-24 19:34:26.481621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:01.057 [2024-04-24 19:34:26.481630] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:01.057 [2024-04-24 19:34:26.481639] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.057 [2024-04-24 19:34:26.481660] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:01.057 [2024-04-24 19:34:26.481671] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:01.057 [2024-04-24 19:34:26.481680] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:01.057 [2024-04-24 19:34:26.481689] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:01.057 [2024-04-24 19:34:26.481698] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:01.057 [2024-04-24 19:34:26.481708] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:01.057 [2024-04-24 19:34:26.481718] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:01.057 [2024-04-24 19:34:26.481728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:01.057 [2024-04-24 19:34:26.481737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:01.057 [2024-04-24 19:34:26.481746] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:01.057 [2024-04-24 19:34:26.481755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:01.057 [2024-04-24 19:34:26.481764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:01.057 [2024-04-24 19:34:26.481774] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:01.057 [2024-04-24 19:34:26.481783] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:01.057 [2024-04-24 19:34:26.481792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:01.057 [2024-04-24 19:34:26.481801] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:01.057 [2024-04-24 19:34:26.481810] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:01.057 [2024-04-24 19:34:26.481819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:01.057 [2024-04-24 19:34:26.481829] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:01.057 [2024-04-24 19:34:26.481839] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:01.057 [2024-04-24 19:34:26.481848] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.057 [2024-04-24 19:34:26.481859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:01.057 [2024-04-24 19:34:26.481869] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:01.057 [2024-04-24 19:34:26.481878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.057 [2024-04-24 19:34:26.481887] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:01.057 [2024-04-24 19:34:26.481897] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:01.057 [2024-04-24 19:34:26.481908] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.057 [2024-04-24 19:34:26.481919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.057 [2024-04-24 19:34:26.481929] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:01.057 [2024-04-24 19:34:26.481939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:01.057 [2024-04-24 19:34:26.481948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:01.057 [2024-04-24 19:34:26.481957] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:01.057 [2024-04-24 19:34:26.481966] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:01.057 [2024-04-24 19:34:26.481976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:01.057 [2024-04-24 19:34:26.481987] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:01.057 [2024-04-24 19:34:26.482000] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.057 [2024-04-24 19:34:26.482011] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:01.057 [2024-04-24 19:34:26.482022] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:01.057 [2024-04-24 19:34:26.482032] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:01.057 [2024-04-24 19:34:26.482043] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:01.057 [2024-04-24 19:34:26.482053] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:01.057 [2024-04-24 19:34:26.482063] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:01.057 [2024-04-24 19:34:26.482073] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:01.057 [2024-04-24 19:34:26.482082] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:01.057 [2024-04-24 19:34:26.482092] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:01.057 [2024-04-24 19:34:26.482102] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:01.057 [2024-04-24 19:34:26.482112] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:01.058 [2024-04-24 19:34:26.482121] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:01.058 [2024-04-24 19:34:26.482132] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:01.058 [2024-04-24 19:34:26.482141] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:01.058 [2024-04-24 19:34:26.482154] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.058 [2024-04-24 19:34:26.482165] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:01.058 [2024-04-24 19:34:26.482176] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:01.058 [2024-04-24 19:34:26.482189] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:01.058 [2024-04-24 19:34:26.482200] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:01.058 [2024-04-24 19:34:26.482211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.482222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:01.058 [2024-04-24 19:34:26.482237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:20:01.058 [2024-04-24 19:34:26.482247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.058 [2024-04-24 19:34:26.514102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.514164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.058 [2024-04-24 19:34:26.514179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.847 ms 00:20:01.058 [2024-04-24 19:34:26.514188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.058 [2024-04-24 19:34:26.514390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.514406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:01.058 [2024-04-24 19:34:26.514419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:01.058 [2024-04-24 19:34:26.514430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.058 [2024-04-24 19:34:26.587495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.587562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.058 [2024-04-24 19:34:26.587578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.169 ms 00:20:01.058 [2024-04-24 19:34:26.587588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.058 [2024-04-24 19:34:26.587754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.587767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.058 [2024-04-24 19:34:26.587778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:01.058 [2024-04-24 19:34:26.587788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.058 [2024-04-24 19:34:26.588609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.588629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.058 [2024-04-24 19:34:26.588650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.797 ms 00:20:01.058 [2024-04-24 19:34:26.588659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.058 [2024-04-24 19:34:26.588814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.588830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.058 [2024-04-24 19:34:26.588841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:01.058 [2024-04-24 19:34:26.588851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.058 [2024-04-24 19:34:26.617731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.617795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:01.058 [2024-04-24 19:34:26.617810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.900 ms 00:20:01.058 
[2024-04-24 19:34:26.617821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.058 [2024-04-24 19:34:26.639624] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:01.058 [2024-04-24 19:34:26.639697] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:01.058 [2024-04-24 19:34:26.639717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.639729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:01.058 [2024-04-24 19:34:26.639741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.745 ms 00:20:01.058 [2024-04-24 19:34:26.639750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.058 [2024-04-24 19:34:26.674630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.674693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:01.058 [2024-04-24 19:34:26.674709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.837 ms 00:20:01.058 [2024-04-24 19:34:26.674728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.058 [2024-04-24 19:34:26.695432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.695493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:01.058 [2024-04-24 19:34:26.695522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.628 ms 00:20:01.058 [2024-04-24 19:34:26.695532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.058 [2024-04-24 19:34:26.716017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.716066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:01.058 [2024-04-24 19:34:26.716081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.425 ms 00:20:01.058 [2024-04-24 19:34:26.716090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.058 [2024-04-24 19:34:26.716689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.058 [2024-04-24 19:34:26.716708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:01.058 [2024-04-24 19:34:26.716720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:20:01.058 [2024-04-24 19:34:26.716731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.338 [2024-04-24 19:34:26.828878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.338 [2024-04-24 19:34:26.828972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:01.338 [2024-04-24 19:34:26.828994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.328 ms 00:20:01.338 [2024-04-24 19:34:26.829006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.338 [2024-04-24 19:34:26.845894] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:01.338 [2024-04-24 19:34:26.875066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.338 [2024-04-24 19:34:26.875169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:01.338 [2024-04-24 19:34:26.875187] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.922 ms 00:20:01.338 [2024-04-24 19:34:26.875206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.338 [2024-04-24 19:34:26.875399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.338 [2024-04-24 19:34:26.875412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:01.338 [2024-04-24 19:34:26.875423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:01.338 [2024-04-24 19:34:26.875434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.338 [2024-04-24 19:34:26.875519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.338 [2024-04-24 19:34:26.875534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:01.338 [2024-04-24 19:34:26.875549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:01.338 [2024-04-24 19:34:26.875558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.338 [2024-04-24 19:34:26.878097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.338 [2024-04-24 19:34:26.878126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:01.339 [2024-04-24 19:34:26.878136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:20:01.339 [2024-04-24 19:34:26.878145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.339 [2024-04-24 19:34:26.878180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.339 [2024-04-24 19:34:26.878190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:01.339 [2024-04-24 19:34:26.878199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:01.339 [2024-04-24 19:34:26.878212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.339 [2024-04-24 19:34:26.878256] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:01.339 [2024-04-24 19:34:26.878267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.339 [2024-04-24 19:34:26.878282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:01.339 [2024-04-24 19:34:26.878295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:01.339 [2024-04-24 19:34:26.878308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.339 [2024-04-24 19:34:26.918243] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.339 [2024-04-24 19:34:26.918315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:01.339 [2024-04-24 19:34:26.918344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.972 ms 00:20:01.339 [2024-04-24 19:34:26.918370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.339 [2024-04-24 19:34:26.918527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.339 [2024-04-24 19:34:26.918540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:01.339 [2024-04-24 19:34:26.918550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:01.339 [2024-04-24 19:34:26.918560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.339 [2024-04-24 19:34:26.920042] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:01.339 [2024-04-24 19:34:26.926033] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 492.278 ms, result 0 00:20:01.339 [2024-04-24 19:34:26.926977] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:01.339 [2024-04-24 19:34:26.947762] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:09.898  Copying: 35/256 [MB] (35 MBps) Copying: 67/256 [MB] (31 MBps) Copying: 94/256 [MB] (27 MBps) Copying: 124/256 [MB] (29 MBps) Copying: 153/256 [MB] (28 MBps) Copying: 182/256 [MB] (29 MBps) Copying: 211/256 [MB] (29 MBps) Copying: 243/256 [MB] (31 MBps) Copying: 256/256 [MB] (average 30 MBps)[2024-04-24 19:34:35.354494] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:09.898 [2024-04-24 19:34:35.370923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.898 [2024-04-24 19:34:35.370984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:09.898 [2024-04-24 19:34:35.371002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:09.898 [2024-04-24 19:34:35.371011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.898 [2024-04-24 19:34:35.371039] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:09.898 [2024-04-24 19:34:35.375168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.898 [2024-04-24 19:34:35.375238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:09.898 [2024-04-24 19:34:35.375262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.119 ms 00:20:09.898 [2024-04-24 19:34:35.375270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.898 [2024-04-24 19:34:35.375545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.898 [2024-04-24 19:34:35.375563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:09.898 [2024-04-24 19:34:35.375575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:20:09.898 [2024-04-24 19:34:35.375583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.898 [2024-04-24 19:34:35.378679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.898 [2024-04-24 19:34:35.378706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:09.898 [2024-04-24 19:34:35.378715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.085 ms 00:20:09.898 [2024-04-24 19:34:35.378723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.898 [2024-04-24 19:34:35.385143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.898 [2024-04-24 19:34:35.385194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:09.898 [2024-04-24 19:34:35.385205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.410 ms 00:20:09.898 [2024-04-24 19:34:35.385215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.898 [2024-04-24 19:34:35.433095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.898 [2024-04-24 19:34:35.433170] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:09.898 [2024-04-24 19:34:35.433185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.887 ms 00:20:09.898 [2024-04-24 19:34:35.433194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.898 [2024-04-24 19:34:35.460978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.898 [2024-04-24 19:34:35.461055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:09.898 [2024-04-24 19:34:35.461071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.744 ms 00:20:09.898 [2024-04-24 19:34:35.461081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.898 [2024-04-24 19:34:35.461276] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.898 [2024-04-24 19:34:35.461321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:09.898 [2024-04-24 19:34:35.461332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:09.898 [2024-04-24 19:34:35.461341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.898 [2024-04-24 19:34:35.510209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.898 [2024-04-24 19:34:35.510286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:09.898 [2024-04-24 19:34:35.510303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.930 ms 00:20:09.898 [2024-04-24 19:34:35.510315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.898 [2024-04-24 19:34:35.559053] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.898 [2024-04-24 19:34:35.559121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:09.898 [2024-04-24 19:34:35.559135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.726 ms 00:20:09.898 [2024-04-24 19:34:35.559143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.161 [2024-04-24 19:34:35.605889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.161 [2024-04-24 19:34:35.605962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:10.161 [2024-04-24 19:34:35.605978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.735 ms 00:20:10.161 [2024-04-24 19:34:35.605986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.161 [2024-04-24 19:34:35.652594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.161 [2024-04-24 19:34:35.652693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:10.161 [2024-04-24 19:34:35.652723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.560 ms 00:20:10.161 [2024-04-24 19:34:35.652734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.161 [2024-04-24 19:34:35.652831] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:10.161 [2024-04-24 19:34:35.652854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.652869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.652883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.652899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.652916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.652930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.652939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.652948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.652957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.652966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.652976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.652984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.652995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653383] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:10.161 [2024-04-24 19:34:35.653504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653624] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:10.162 [2024-04-24 19:34:35.653897] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:10.162 [2024-04-24 19:34:35.653911] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc9fe5a4-cec1-4dff-a75f-c7daf97221d9 00:20:10.162 [2024-04-24 19:34:35.653921] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:10.162 [2024-04-24 19:34:35.653942] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:10.162 [2024-04-24 19:34:35.653950] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:10.162 [2024-04-24 19:34:35.653959] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:10.162 [2024-04-24 19:34:35.653967] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:10.162 [2024-04-24 19:34:35.653977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:10.162 [2024-04-24 19:34:35.653986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:10.162 [2024-04-24 19:34:35.653993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:10.162 [2024-04-24 19:34:35.654018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:10.162 [2024-04-24 19:34:35.654028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.162 [2024-04-24 19:34:35.654036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:10.162 [2024-04-24 19:34:35.654046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.201 ms 00:20:10.162 [2024-04-24 19:34:35.654057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.162 [2024-04-24 19:34:35.677748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.162 [2024-04-24 19:34:35.677818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:10.162 [2024-04-24 19:34:35.677834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.692 ms 00:20:10.162 [2024-04-24 19:34:35.677842] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.162 [2024-04-24 19:34:35.678244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.162 [2024-04-24 19:34:35.678262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:10.162 [2024-04-24 19:34:35.678273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:20:10.162 [2024-04-24 19:34:35.678293] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.162 [2024-04-24 19:34:35.746640] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.162 [2024-04-24 19:34:35.746700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:10.162 [2024-04-24 19:34:35.746715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.162 [2024-04-24 19:34:35.746724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.162 [2024-04-24 19:34:35.746844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.162 [2024-04-24 19:34:35.746857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:10.162 [2024-04-24 19:34:35.746866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.162 [2024-04-24 19:34:35.746881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.162 [2024-04-24 19:34:35.746948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.162 [2024-04-24 19:34:35.746963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:10.162 [2024-04-24 19:34:35.746973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:20:10.162 [2024-04-24 19:34:35.746983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.162 [2024-04-24 19:34:35.747005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.162 [2024-04-24 19:34:35.747015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:10.162 [2024-04-24 19:34:35.747025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.162 [2024-04-24 19:34:35.747034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.426 [2024-04-24 19:34:35.887619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.426 [2024-04-24 19:34:35.887689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:10.426 [2024-04-24 19:34:35.887705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.426 [2024-04-24 19:34:35.887715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.426 [2024-04-24 19:34:35.943883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.426 [2024-04-24 19:34:35.943942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:10.426 [2024-04-24 19:34:35.943956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.426 [2024-04-24 19:34:35.943977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.426 [2024-04-24 19:34:35.944052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.426 [2024-04-24 19:34:35.944064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.426 [2024-04-24 19:34:35.944074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.426 [2024-04-24 19:34:35.944082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.426 [2024-04-24 19:34:35.944115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.426 [2024-04-24 19:34:35.944125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.426 [2024-04-24 19:34:35.944134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.426 [2024-04-24 19:34:35.944144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.426 [2024-04-24 19:34:35.944272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.426 [2024-04-24 19:34:35.944289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.426 [2024-04-24 19:34:35.944299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.426 [2024-04-24 19:34:35.944309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.426 [2024-04-24 19:34:35.944352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.426 [2024-04-24 19:34:35.944374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:10.426 [2024-04-24 19:34:35.944384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.426 [2024-04-24 19:34:35.944398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.426 [2024-04-24 19:34:35.944443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.426 [2024-04-24 19:34:35.944455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.426 [2024-04-24 
19:34:35.944465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.426 [2024-04-24 19:34:35.944474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.426 [2024-04-24 19:34:35.944525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:10.426 [2024-04-24 19:34:35.944537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.426 [2024-04-24 19:34:35.944547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:10.426 [2024-04-24 19:34:35.944557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.426 [2024-04-24 19:34:35.944754] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 574.937 ms, result 0 00:20:11.832 00:20:11.832 00:20:11.832 19:34:37 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:11.832 19:34:37 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:12.403 19:34:38 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:12.661 [2024-04-24 19:34:38.145776] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:20:12.661 [2024-04-24 19:34:38.145923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79728 ] 00:20:12.661 [2024-04-24 19:34:38.321093] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.920 [2024-04-24 19:34:38.596086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.488 [2024-04-24 19:34:39.066043] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.488 [2024-04-24 19:34:39.066128] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:13.749 [2024-04-24 19:34:39.226663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.749 [2024-04-24 19:34:39.226721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:13.749 [2024-04-24 19:34:39.226736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:13.749 [2024-04-24 19:34:39.226749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.749 [2024-04-24 19:34:39.230254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.749 [2024-04-24 19:34:39.230304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:13.749 [2024-04-24 19:34:39.230328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.488 ms 00:20:13.749 [2024-04-24 19:34:39.230345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.749 [2024-04-24 19:34:39.230475] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:13.749 [2024-04-24 19:34:39.231921] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:13.749 [2024-04-24 19:34:39.231957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.749 [2024-04-24 19:34:39.231968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:20:13.749 [2024-04-24 19:34:39.231982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.496 ms 00:20:13.749 [2024-04-24 19:34:39.231992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.749 [2024-04-24 19:34:39.233560] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:13.749 [2024-04-24 19:34:39.257720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.749 [2024-04-24 19:34:39.257786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:13.749 [2024-04-24 19:34:39.257801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.203 ms 00:20:13.749 [2024-04-24 19:34:39.257811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.749 [2024-04-24 19:34:39.257993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.749 [2024-04-24 19:34:39.258009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:13.749 [2024-04-24 19:34:39.258020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:13.749 [2024-04-24 19:34:39.258033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.749 [2024-04-24 19:34:39.265859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.749 [2024-04-24 19:34:39.265902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:13.749 [2024-04-24 19:34:39.265914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.788 ms 00:20:13.749 [2024-04-24 19:34:39.265924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.749 [2024-04-24 19:34:39.266083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.749 [2024-04-24 19:34:39.266110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:13.749 [2024-04-24 19:34:39.266127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:20:13.749 [2024-04-24 19:34:39.266136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.749 [2024-04-24 19:34:39.266176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.749 [2024-04-24 19:34:39.266186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:13.749 [2024-04-24 19:34:39.266196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:13.749 [2024-04-24 19:34:39.266204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.749 [2024-04-24 19:34:39.266233] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:13.749 [2024-04-24 19:34:39.273116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.749 [2024-04-24 19:34:39.273161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:13.749 [2024-04-24 19:34:39.273172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.905 ms 00:20:13.749 [2024-04-24 19:34:39.273182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.749 [2024-04-24 19:34:39.273279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.749 [2024-04-24 19:34:39.273296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:13.749 [2024-04-24 19:34:39.273306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:20:13.749 [2024-04-24 19:34:39.273315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.750 [2024-04-24 19:34:39.273339] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:13.750 [2024-04-24 19:34:39.273362] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:13.750 [2024-04-24 19:34:39.273401] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:13.750 [2024-04-24 19:34:39.273427] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:13.750 [2024-04-24 19:34:39.273509] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:13.750 [2024-04-24 19:34:39.273522] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:13.750 [2024-04-24 19:34:39.273535] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:13.750 [2024-04-24 19:34:39.273547] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:13.750 [2024-04-24 19:34:39.273558] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:13.750 [2024-04-24 19:34:39.273568] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:13.750 [2024-04-24 19:34:39.273578] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:13.750 [2024-04-24 19:34:39.273587] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:13.750 [2024-04-24 19:34:39.273595] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:13.750 [2024-04-24 19:34:39.273604] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.750 [2024-04-24 19:34:39.273613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:13.750 [2024-04-24 19:34:39.273628] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:13.750 [2024-04-24 19:34:39.273652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.750 [2024-04-24 19:34:39.273723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.750 [2024-04-24 19:34:39.273733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:13.750 [2024-04-24 19:34:39.273743] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:13.750 [2024-04-24 19:34:39.273752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.750 [2024-04-24 19:34:39.273836] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:13.750 [2024-04-24 19:34:39.273849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:13.750 [2024-04-24 19:34:39.273858] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:13.750 [2024-04-24 19:34:39.273871] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.750 [2024-04-24 19:34:39.273881] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:13.750 [2024-04-24 19:34:39.273889] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:13.750 [2024-04-24 
19:34:39.273898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:13.750 [2024-04-24 19:34:39.273906] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:13.750 [2024-04-24 19:34:39.273916] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:13.750 [2024-04-24 19:34:39.273935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:13.750 [2024-04-24 19:34:39.273962] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:13.750 [2024-04-24 19:34:39.273971] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:13.750 [2024-04-24 19:34:39.273979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:13.750 [2024-04-24 19:34:39.273986] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:13.750 [2024-04-24 19:34:39.273994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:13.750 [2024-04-24 19:34:39.274001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.750 [2024-04-24 19:34:39.274009] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:13.750 [2024-04-24 19:34:39.274017] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:13.750 [2024-04-24 19:34:39.274024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.750 [2024-04-24 19:34:39.274031] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:13.750 [2024-04-24 19:34:39.274039] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:13.750 [2024-04-24 19:34:39.274046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:13.750 [2024-04-24 19:34:39.274054] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:13.750 [2024-04-24 19:34:39.274061] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:13.750 [2024-04-24 19:34:39.274069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:13.750 [2024-04-24 19:34:39.274076] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:13.750 [2024-04-24 19:34:39.274083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:13.750 [2024-04-24 19:34:39.274090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:13.750 [2024-04-24 19:34:39.274097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:13.750 [2024-04-24 19:34:39.274104] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:13.750 [2024-04-24 19:34:39.274111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:13.750 [2024-04-24 19:34:39.274118] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:13.750 [2024-04-24 19:34:39.274126] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:13.750 [2024-04-24 19:34:39.274132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:13.750 [2024-04-24 19:34:39.274140] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:13.750 [2024-04-24 19:34:39.274147] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:13.750 [2024-04-24 19:34:39.274154] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:13.750 [2024-04-24 19:34:39.274161] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:20:13.750 [2024-04-24 19:34:39.274169] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:13.750 [2024-04-24 19:34:39.274175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:13.750 [2024-04-24 19:34:39.274182] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:13.750 [2024-04-24 19:34:39.274191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:13.750 [2024-04-24 19:34:39.274199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:13.750 [2024-04-24 19:34:39.274206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:13.750 [2024-04-24 19:34:39.274214] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:13.750 [2024-04-24 19:34:39.274222] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:13.750 [2024-04-24 19:34:39.274229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:13.750 [2024-04-24 19:34:39.274237] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:13.750 [2024-04-24 19:34:39.274244] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:13.750 [2024-04-24 19:34:39.274252] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:13.750 [2024-04-24 19:34:39.274261] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:13.750 [2024-04-24 19:34:39.274271] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:13.750 [2024-04-24 19:34:39.274280] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:13.750 [2024-04-24 19:34:39.274288] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:13.750 [2024-04-24 19:34:39.274296] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:13.750 [2024-04-24 19:34:39.274303] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:13.750 [2024-04-24 19:34:39.274311] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:13.750 [2024-04-24 19:34:39.274319] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:13.750 [2024-04-24 19:34:39.274327] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:13.750 [2024-04-24 19:34:39.274335] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:13.750 [2024-04-24 19:34:39.274342] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:13.750 [2024-04-24 19:34:39.274351] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:13.750 [2024-04-24 19:34:39.274359] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:13.750 [2024-04-24 19:34:39.274366] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:13.750 [2024-04-24 19:34:39.274374] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:13.751 [2024-04-24 19:34:39.274382] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:13.751 [2024-04-24 19:34:39.274390] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:13.751 [2024-04-24 19:34:39.274399] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:13.751 [2024-04-24 19:34:39.274407] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:13.751 [2024-04-24 19:34:39.274415] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:13.751 [2024-04-24 19:34:39.274423] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:13.751 [2024-04-24 19:34:39.274431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.751 [2024-04-24 19:34:39.274440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:13.751 [2024-04-24 19:34:39.274452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:20:13.751 [2024-04-24 19:34:39.274460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.751 [2024-04-24 19:34:39.302611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.751 [2024-04-24 19:34:39.302681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:13.751 [2024-04-24 19:34:39.302696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.143 ms 00:20:13.751 [2024-04-24 19:34:39.302705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.751 [2024-04-24 19:34:39.302883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.751 [2024-04-24 19:34:39.302897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:13.751 [2024-04-24 19:34:39.302907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:13.751 [2024-04-24 19:34:39.302916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.751 [2024-04-24 19:34:39.373863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.751 [2024-04-24 19:34:39.373922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:13.751 [2024-04-24 19:34:39.373939] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.055 ms 00:20:13.751 [2024-04-24 19:34:39.373952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.751 [2024-04-24 19:34:39.374083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.751 [2024-04-24 19:34:39.374098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:13.751 [2024-04-24 19:34:39.374112] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:13.751 [2024-04-24 19:34:39.374125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.751 [2024-04-24 19:34:39.374626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.751 [2024-04-24 19:34:39.374666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:13.751 [2024-04-24 19:34:39.374677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:20:13.751 [2024-04-24 19:34:39.374685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.751 [2024-04-24 19:34:39.374823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.751 [2024-04-24 19:34:39.374841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:13.751 [2024-04-24 19:34:39.374850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:13.751 [2024-04-24 19:34:39.374859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:13.751 [2024-04-24 19:34:39.401886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:13.751 [2024-04-24 19:34:39.401942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:13.751 [2024-04-24 19:34:39.401958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.045 ms 00:20:13.751 [2024-04-24 19:34:39.401967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.011 [2024-04-24 19:34:39.426816] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:14.011 [2024-04-24 19:34:39.426892] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:14.011 [2024-04-24 19:34:39.426917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.011 [2024-04-24 19:34:39.426930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:14.011 [2024-04-24 19:34:39.426946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.825 ms 00:20:14.011 [2024-04-24 19:34:39.426957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.011 [2024-04-24 19:34:39.465757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.012 [2024-04-24 19:34:39.465829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:14.012 [2024-04-24 19:34:39.465846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.713 ms 00:20:14.012 [2024-04-24 19:34:39.465868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.012 [2024-04-24 19:34:39.489618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.012 [2024-04-24 19:34:39.489695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:14.012 [2024-04-24 19:34:39.489710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.625 ms 00:20:14.012 [2024-04-24 19:34:39.489718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.012 [2024-04-24 19:34:39.512047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.012 [2024-04-24 19:34:39.512107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:14.012 [2024-04-24 19:34:39.512122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.244 ms 00:20:14.012 [2024-04-24 
19:34:39.512131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.012 [2024-04-24 19:34:39.512761] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.012 [2024-04-24 19:34:39.512787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:14.012 [2024-04-24 19:34:39.512798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:20:14.012 [2024-04-24 19:34:39.512807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.012 [2024-04-24 19:34:39.618514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.012 [2024-04-24 19:34:39.618585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:14.012 [2024-04-24 19:34:39.618602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.878 ms 00:20:14.012 [2024-04-24 19:34:39.618612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.012 [2024-04-24 19:34:39.634926] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:14.012 [2024-04-24 19:34:39.653345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.012 [2024-04-24 19:34:39.653406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:14.012 [2024-04-24 19:34:39.653420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.631 ms 00:20:14.012 [2024-04-24 19:34:39.653429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.012 [2024-04-24 19:34:39.653546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.012 [2024-04-24 19:34:39.653560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:14.012 [2024-04-24 19:34:39.653570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:14.012 [2024-04-24 19:34:39.653579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.012 [2024-04-24 19:34:39.653656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.012 [2024-04-24 19:34:39.653669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:14.012 [2024-04-24 19:34:39.653682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:14.012 [2024-04-24 19:34:39.653692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.012 [2024-04-24 19:34:39.655606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.012 [2024-04-24 19:34:39.655645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:14.012 [2024-04-24 19:34:39.655656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.895 ms 00:20:14.012 [2024-04-24 19:34:39.655666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.012 [2024-04-24 19:34:39.655700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.012 [2024-04-24 19:34:39.655710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:14.012 [2024-04-24 19:34:39.655720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:14.012 [2024-04-24 19:34:39.655734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.012 [2024-04-24 19:34:39.655771] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:14.012 [2024-04-24 
19:34:39.655783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.012 [2024-04-24 19:34:39.655791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:14.012 [2024-04-24 19:34:39.655800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:14.012 [2024-04-24 19:34:39.655808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.272 [2024-04-24 19:34:39.700684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.272 [2024-04-24 19:34:39.700744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:14.272 [2024-04-24 19:34:39.700769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.933 ms 00:20:14.272 [2024-04-24 19:34:39.700777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.272 [2024-04-24 19:34:39.700941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.272 [2024-04-24 19:34:39.700954] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:14.272 [2024-04-24 19:34:39.700965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:14.272 [2024-04-24 19:34:39.700974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.272 [2024-04-24 19:34:39.702148] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.272 [2024-04-24 19:34:39.708307] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 476.070 ms, result 0 00:20:14.272 [2024-04-24 19:34:39.708985] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.272 [2024-04-24 19:34:39.730128] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:14.272  Copying: 4096/4096 [kB] (average 31 MBps)[2024-04-24 19:34:39.863775] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:14.272 [2024-04-24 19:34:39.880254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.272 [2024-04-24 19:34:39.880305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:14.272 [2024-04-24 19:34:39.880319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:14.272 [2024-04-24 19:34:39.880329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.272 [2024-04-24 19:34:39.880362] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:14.272 [2024-04-24 19:34:39.884388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.272 [2024-04-24 19:34:39.884426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:14.272 [2024-04-24 19:34:39.884447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.015 ms 00:20:14.272 [2024-04-24 19:34:39.884456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.272 [2024-04-24 19:34:39.886400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.272 [2024-04-24 19:34:39.886436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:14.272 [2024-04-24 19:34:39.886447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.920 ms 
00:20:14.272 [2024-04-24 19:34:39.886456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.272 [2024-04-24 19:34:39.890127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.272 [2024-04-24 19:34:39.890164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:14.272 [2024-04-24 19:34:39.890173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.658 ms 00:20:14.272 [2024-04-24 19:34:39.890185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.272 [2024-04-24 19:34:39.896835] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.272 [2024-04-24 19:34:39.896877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:14.272 [2024-04-24 19:34:39.896887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.614 ms 00:20:14.272 [2024-04-24 19:34:39.896895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.272 [2024-04-24 19:34:39.941869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.272 [2024-04-24 19:34:39.941925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:14.272 [2024-04-24 19:34:39.941940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.001 ms 00:20:14.272 [2024-04-24 19:34:39.941948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.533 [2024-04-24 19:34:39.967575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.533 [2024-04-24 19:34:39.967627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:14.533 [2024-04-24 19:34:39.967650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.587 ms 00:20:14.533 [2024-04-24 19:34:39.967660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.533 [2024-04-24 19:34:39.967837] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.533 [2024-04-24 19:34:39.967866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:14.533 [2024-04-24 19:34:39.967877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:20:14.533 [2024-04-24 19:34:39.967886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.533 [2024-04-24 19:34:40.013796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.533 [2024-04-24 19:34:40.013852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:14.533 [2024-04-24 19:34:40.013866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.975 ms 00:20:14.533 [2024-04-24 19:34:40.013875] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.533 [2024-04-24 19:34:40.057764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.533 [2024-04-24 19:34:40.057826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:14.533 [2024-04-24 19:34:40.057843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.892 ms 00:20:14.533 [2024-04-24 19:34:40.057854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.533 [2024-04-24 19:34:40.102692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.533 [2024-04-24 19:34:40.102754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:14.533 [2024-04-24 19:34:40.102769] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.836 ms 00:20:14.533 [2024-04-24 19:34:40.102778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.533 [2024-04-24 19:34:40.149133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.533 [2024-04-24 19:34:40.149191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:14.533 [2024-04-24 19:34:40.149206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.313 ms 00:20:14.533 [2024-04-24 19:34:40.149215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.533 [2024-04-24 19:34:40.149299] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:14.533 [2024-04-24 19:34:40.149319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149738] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:14.533 [2024-04-24 19:34:40.149855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 
19:34:40.149963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.149999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 
00:20:14.534 [2024-04-24 19:34:40.150214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:14.534 [2024-04-24 19:34:40.150279] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:14.534 [2024-04-24 19:34:40.150293] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc9fe5a4-cec1-4dff-a75f-c7daf97221d9 00:20:14.534 [2024-04-24 19:34:40.150302] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:14.534 [2024-04-24 19:34:40.150311] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:14.534 [2024-04-24 19:34:40.150320] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:14.534 [2024-04-24 19:34:40.150329] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:14.534 [2024-04-24 19:34:40.150337] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:14.534 [2024-04-24 19:34:40.150348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:14.534 [2024-04-24 19:34:40.150357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:14.534 [2024-04-24 19:34:40.150365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:14.534 [2024-04-24 19:34:40.150372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:14.534 [2024-04-24 19:34:40.150382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.534 [2024-04-24 19:34:40.150391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:14.534 [2024-04-24 19:34:40.150400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms 00:20:14.534 [2024-04-24 19:34:40.150409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.534 [2024-04-24 19:34:40.174272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.534 [2024-04-24 19:34:40.174324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:14.534 [2024-04-24 19:34:40.174339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.878 ms 00:20:14.534 [2024-04-24 19:34:40.174348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.534 [2024-04-24 19:34:40.174688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:14.534 [2024-04-24 19:34:40.174706] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:14.534 [2024-04-24 19:34:40.174726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:20:14.534 [2024-04-24 19:34:40.174735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.792 [2024-04-24 19:34:40.243986] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:20:14.792 [2024-04-24 19:34:40.244047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:14.792 [2024-04-24 19:34:40.244062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.792 [2024-04-24 19:34:40.244072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.792 [2024-04-24 19:34:40.244174] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.792 [2024-04-24 19:34:40.244187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:14.792 [2024-04-24 19:34:40.244202] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.792 [2024-04-24 19:34:40.244211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.792 [2024-04-24 19:34:40.244269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.792 [2024-04-24 19:34:40.244284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:14.792 [2024-04-24 19:34:40.244294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.792 [2024-04-24 19:34:40.244303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.792 [2024-04-24 19:34:40.244325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.792 [2024-04-24 19:34:40.244334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:14.792 [2024-04-24 19:34:40.244344] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.792 [2024-04-24 19:34:40.244353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.792 [2024-04-24 19:34:40.387931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.792 [2024-04-24 19:34:40.387989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:14.792 [2024-04-24 19:34:40.388004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.792 [2024-04-24 19:34:40.388014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.792 [2024-04-24 19:34:40.445342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.792 [2024-04-24 19:34:40.445421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:14.792 [2024-04-24 19:34:40.445434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.792 [2024-04-24 19:34:40.445455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.792 [2024-04-24 19:34:40.445527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.792 [2024-04-24 19:34:40.445538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:14.792 [2024-04-24 19:34:40.445548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.792 [2024-04-24 19:34:40.445556] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.792 [2024-04-24 19:34:40.445588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.792 [2024-04-24 19:34:40.445598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:14.792 [2024-04-24 19:34:40.445609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.792 [2024-04-24 19:34:40.445619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:14.792 [2024-04-24 19:34:40.445769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.792 [2024-04-24 19:34:40.445785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:14.792 [2024-04-24 19:34:40.445795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.792 [2024-04-24 19:34:40.445805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.792 [2024-04-24 19:34:40.445845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.792 [2024-04-24 19:34:40.445859] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:14.792 [2024-04-24 19:34:40.445868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.792 [2024-04-24 19:34:40.445881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.792 [2024-04-24 19:34:40.445924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.792 [2024-04-24 19:34:40.445934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:14.792 [2024-04-24 19:34:40.445946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.792 [2024-04-24 19:34:40.445954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.792 [2024-04-24 19:34:40.446004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:14.792 [2024-04-24 19:34:40.446023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:14.792 [2024-04-24 19:34:40.446033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:14.792 [2024-04-24 19:34:40.446043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:14.793 [2024-04-24 19:34:40.446200] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 567.032 ms, result 0 00:20:16.701 00:20:16.701 00:20:16.701 19:34:41 -- ftl/trim.sh@93 -- # svcpid=79765 00:20:16.701 19:34:41 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:16.701 19:34:41 -- ftl/trim.sh@94 -- # waitforlisten 79765 00:20:16.701 19:34:41 -- common/autotest_common.sh@817 -- # '[' -z 79765 ']' 00:20:16.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.701 19:34:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.701 19:34:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:16.701 19:34:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.701 19:34:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:16.701 19:34:42 -- common/autotest_common.sh@10 -- # set +x 00:20:16.701 [2024-04-24 19:34:42.108630] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:20:16.701 [2024-04-24 19:34:42.108786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79765 ] 00:20:16.701 [2024-04-24 19:34:42.269884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.960 [2024-04-24 19:34:42.567743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:18.395 19:34:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:18.395 19:34:43 -- common/autotest_common.sh@850 -- # return 0 00:20:18.395 19:34:43 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:18.395 [2024-04-24 19:34:43.856692] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.395 [2024-04-24 19:34:43.856778] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:18.395 [2024-04-24 19:34:44.031041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.395 [2024-04-24 19:34:44.031110] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:18.395 [2024-04-24 19:34:44.031131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:18.395 [2024-04-24 19:34:44.031140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.395 [2024-04-24 19:34:44.034659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.396 [2024-04-24 19:34:44.034701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.396 [2024-04-24 19:34:44.034716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.502 ms 00:20:18.396 [2024-04-24 19:34:44.034728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.396 [2024-04-24 19:34:44.034856] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:18.396 [2024-04-24 19:34:44.036221] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:18.396 [2024-04-24 19:34:44.036262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.396 [2024-04-24 19:34:44.036275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.396 [2024-04-24 19:34:44.036287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.424 ms 00:20:18.396 [2024-04-24 19:34:44.036296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.396 [2024-04-24 19:34:44.037815] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:18.396 [2024-04-24 19:34:44.061548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.396 [2024-04-24 19:34:44.061615] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:18.396 [2024-04-24 19:34:44.061644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.782 ms 00:20:18.396 [2024-04-24 19:34:44.061657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.396 [2024-04-24 19:34:44.061791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.396 [2024-04-24 19:34:44.061811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:18.396 [2024-04-24 19:34:44.061822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.034 ms 00:20:18.396 [2024-04-24 19:34:44.061834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.396 [2024-04-24 19:34:44.069305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.396 [2024-04-24 19:34:44.069373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.396 [2024-04-24 19:34:44.069388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.428 ms 00:20:18.396 [2024-04-24 19:34:44.069404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.396 [2024-04-24 19:34:44.069537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.396 [2024-04-24 19:34:44.069560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.396 [2024-04-24 19:34:44.069573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:18.396 [2024-04-24 19:34:44.069586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.396 [2024-04-24 19:34:44.069628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.396 [2024-04-24 19:34:44.069661] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:18.396 [2024-04-24 19:34:44.069671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:18.396 [2024-04-24 19:34:44.069682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.396 [2024-04-24 19:34:44.069712] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:18.656 [2024-04-24 19:34:44.076451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.656 [2024-04-24 19:34:44.076490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.656 [2024-04-24 19:34:44.076503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.757 ms 00:20:18.656 [2024-04-24 19:34:44.076512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.656 [2024-04-24 19:34:44.076597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.656 [2024-04-24 19:34:44.076609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:18.656 [2024-04-24 19:34:44.076623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:18.656 [2024-04-24 19:34:44.076647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.657 [2024-04-24 19:34:44.076675] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:18.657 [2024-04-24 19:34:44.076697] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:18.657 [2024-04-24 19:34:44.076736] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:18.657 [2024-04-24 19:34:44.076769] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:18.657 [2024-04-24 19:34:44.076853] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:18.657 [2024-04-24 19:34:44.076866] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:18.657 [2024-04-24 19:34:44.076880] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout 
blob store 0x140 bytes 00:20:18.657 [2024-04-24 19:34:44.076891] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:18.657 [2024-04-24 19:34:44.076904] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:18.657 [2024-04-24 19:34:44.076913] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:18.657 [2024-04-24 19:34:44.076924] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:18.657 [2024-04-24 19:34:44.076934] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:18.657 [2024-04-24 19:34:44.076944] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:18.657 [2024-04-24 19:34:44.076954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.657 [2024-04-24 19:34:44.076969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:18.657 [2024-04-24 19:34:44.076980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:18.657 [2024-04-24 19:34:44.076991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.657 [2024-04-24 19:34:44.077061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.657 [2024-04-24 19:34:44.077073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:18.657 [2024-04-24 19:34:44.077083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:18.657 [2024-04-24 19:34:44.077093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.657 [2024-04-24 19:34:44.077177] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:18.657 [2024-04-24 19:34:44.077190] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:18.657 [2024-04-24 19:34:44.077202] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.657 [2024-04-24 19:34:44.077214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.657 [2024-04-24 19:34:44.077223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:18.657 [2024-04-24 19:34:44.077233] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:18.657 [2024-04-24 19:34:44.077242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:18.657 [2024-04-24 19:34:44.077265] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:18.657 [2024-04-24 19:34:44.077274] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:18.657 [2024-04-24 19:34:44.077286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.657 [2024-04-24 19:34:44.077295] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:18.657 [2024-04-24 19:34:44.077305] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:18.657 [2024-04-24 19:34:44.077324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:18.657 [2024-04-24 19:34:44.077334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:18.657 [2024-04-24 19:34:44.077342] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:18.657 [2024-04-24 19:34:44.077352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.657 [2024-04-24 19:34:44.077359] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:20:18.657 [2024-04-24 19:34:44.077368] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:18.657 [2024-04-24 19:34:44.077376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.657 [2024-04-24 19:34:44.077397] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:18.657 [2024-04-24 19:34:44.077404] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:18.657 [2024-04-24 19:34:44.077413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:18.657 [2024-04-24 19:34:44.077421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:18.657 [2024-04-24 19:34:44.077430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:18.657 [2024-04-24 19:34:44.077438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:18.657 [2024-04-24 19:34:44.077449] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:18.657 [2024-04-24 19:34:44.077457] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:18.657 [2024-04-24 19:34:44.077468] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:18.657 [2024-04-24 19:34:44.077475] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:18.657 [2024-04-24 19:34:44.077485] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:18.657 [2024-04-24 19:34:44.077492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:18.657 [2024-04-24 19:34:44.077501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:18.657 [2024-04-24 19:34:44.077509] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:18.657 [2024-04-24 19:34:44.077535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:18.657 [2024-04-24 19:34:44.077544] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:18.657 [2024-04-24 19:34:44.077555] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:18.657 [2024-04-24 19:34:44.077562] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.657 [2024-04-24 19:34:44.077573] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:18.657 [2024-04-24 19:34:44.077581] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:18.657 [2024-04-24 19:34:44.077591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:18.657 [2024-04-24 19:34:44.077598] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:18.657 [2024-04-24 19:34:44.077612] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:18.657 [2024-04-24 19:34:44.077621] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:18.657 [2024-04-24 19:34:44.077632] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:18.657 [2024-04-24 19:34:44.077641] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:18.657 [2024-04-24 19:34:44.077651] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:18.657 [2024-04-24 19:34:44.077669] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:18.657 [2024-04-24 19:34:44.077681] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:18.657 [2024-04-24 19:34:44.077688] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:18.657 [2024-04-24 19:34:44.077698] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:18.657 [2024-04-24 19:34:44.077708] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:18.657 [2024-04-24 19:34:44.077721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.657 [2024-04-24 19:34:44.077731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:18.657 [2024-04-24 19:34:44.077742] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:18.657 [2024-04-24 19:34:44.077751] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:18.657 [2024-04-24 19:34:44.077763] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:18.657 [2024-04-24 19:34:44.077771] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:18.657 [2024-04-24 19:34:44.077784] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:18.657 [2024-04-24 19:34:44.077793] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:18.657 [2024-04-24 19:34:44.077804] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:18.657 [2024-04-24 19:34:44.077813] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:18.657 [2024-04-24 19:34:44.077824] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:18.657 [2024-04-24 19:34:44.077833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:18.657 [2024-04-24 19:34:44.077843] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:18.657 [2024-04-24 19:34:44.077852] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:18.657 [2024-04-24 19:34:44.077862] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:18.657 [2024-04-24 19:34:44.077871] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:18.657 [2024-04-24 19:34:44.077885] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:18.657 [2024-04-24 19:34:44.077895] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:18.657 [2024-04-24 19:34:44.077905] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:18.657 [2024-04-24 19:34:44.077914] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:18.657 [2024-04-24 19:34:44.077925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.657 [2024-04-24 19:34:44.077937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:18.657 [2024-04-24 19:34:44.077986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.793 ms 00:20:18.657 [2024-04-24 19:34:44.077997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.657 [2024-04-24 19:34:44.106148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.657 [2024-04-24 19:34:44.106202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.657 [2024-04-24 19:34:44.106219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.146 ms 00:20:18.658 [2024-04-24 19:34:44.106228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.658 [2024-04-24 19:34:44.106416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.658 [2024-04-24 19:34:44.106428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:18.658 [2024-04-24 19:34:44.106440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:18.658 [2024-04-24 19:34:44.106448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.658 [2024-04-24 19:34:44.167866] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.658 [2024-04-24 19:34:44.167935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.658 [2024-04-24 19:34:44.167954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.506 ms 00:20:18.658 [2024-04-24 19:34:44.167965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.658 [2024-04-24 19:34:44.168087] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.658 [2024-04-24 19:34:44.168101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.658 [2024-04-24 19:34:44.168118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:18.658 [2024-04-24 19:34:44.168129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.658 [2024-04-24 19:34:44.168597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.658 [2024-04-24 19:34:44.168620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.658 [2024-04-24 19:34:44.168646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:20:18.658 [2024-04-24 19:34:44.168656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.658 [2024-04-24 19:34:44.168784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.658 [2024-04-24 19:34:44.168803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.658 [2024-04-24 19:34:44.168815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:20:18.658 [2024-04-24 19:34:44.168824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.658 [2024-04-24 19:34:44.195911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.658 [2024-04-24 
19:34:44.195978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.658 [2024-04-24 19:34:44.196015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.108 ms 00:20:18.658 [2024-04-24 19:34:44.196028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.658 [2024-04-24 19:34:44.220352] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:18.658 [2024-04-24 19:34:44.220422] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:18.658 [2024-04-24 19:34:44.220446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.658 [2024-04-24 19:34:44.220458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:18.658 [2024-04-24 19:34:44.220476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.255 ms 00:20:18.658 [2024-04-24 19:34:44.220486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.658 [2024-04-24 19:34:44.257667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.658 [2024-04-24 19:34:44.257749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:18.658 [2024-04-24 19:34:44.257768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.107 ms 00:20:18.658 [2024-04-24 19:34:44.257778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.658 [2024-04-24 19:34:44.281287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.658 [2024-04-24 19:34:44.281354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:18.658 [2024-04-24 19:34:44.281369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.418 ms 00:20:18.658 [2024-04-24 19:34:44.281393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.658 [2024-04-24 19:34:44.304779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.658 [2024-04-24 19:34:44.304841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:18.658 [2024-04-24 19:34:44.304858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.307 ms 00:20:18.658 [2024-04-24 19:34:44.304867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.658 [2024-04-24 19:34:44.305463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.658 [2024-04-24 19:34:44.305490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:18.658 [2024-04-24 19:34:44.305502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:20:18.658 [2024-04-24 19:34:44.305512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.918 [2024-04-24 19:34:44.412549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.918 [2024-04-24 19:34:44.412620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:18.918 [2024-04-24 19:34:44.412656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.206 ms 00:20:18.918 [2024-04-24 19:34:44.412666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.918 [2024-04-24 19:34:44.429969] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:18.918 [2024-04-24 19:34:44.448362] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.918 [2024-04-24 19:34:44.448437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:18.918 [2024-04-24 19:34:44.448454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.617 ms 00:20:18.918 [2024-04-24 19:34:44.448484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.918 [2024-04-24 19:34:44.448606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.918 [2024-04-24 19:34:44.448621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:18.918 [2024-04-24 19:34:44.448657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:18.918 [2024-04-24 19:34:44.448672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.918 [2024-04-24 19:34:44.448755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.918 [2024-04-24 19:34:44.448774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:18.918 [2024-04-24 19:34:44.448784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:18.918 [2024-04-24 19:34:44.448795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.918 [2024-04-24 19:34:44.450681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.918 [2024-04-24 19:34:44.450719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:18.918 [2024-04-24 19:34:44.450729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.864 ms 00:20:18.918 [2024-04-24 19:34:44.450741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.918 [2024-04-24 19:34:44.450771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.918 [2024-04-24 19:34:44.450784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:18.918 [2024-04-24 19:34:44.450793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:18.918 [2024-04-24 19:34:44.450804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.918 [2024-04-24 19:34:44.450842] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:18.918 [2024-04-24 19:34:44.450858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.918 [2024-04-24 19:34:44.450866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:18.918 [2024-04-24 19:34:44.450879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:18.918 [2024-04-24 19:34:44.450888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.918 [2024-04-24 19:34:44.498381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.918 [2024-04-24 19:34:44.498448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:18.918 [2024-04-24 19:34:44.498467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.551 ms 00:20:18.918 [2024-04-24 19:34:44.498493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.918 [2024-04-24 19:34:44.498673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.918 [2024-04-24 19:34:44.498690] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:18.918 [2024-04-24 19:34:44.498714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.057 ms 00:20:18.918 [2024-04-24 19:34:44.498722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.918 [2024-04-24 19:34:44.499755] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.918 [2024-04-24 19:34:44.506233] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 469.300 ms, result 0 00:20:18.918 [2024-04-24 19:34:44.507388] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:18.918 Some configs were skipped because the RPC state that can call them passed over. 00:20:18.918 19:34:44 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:19.177 [2024-04-24 19:34:44.787295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.177 [2024-04-24 19:34:44.787376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:20:19.177 [2024-04-24 19:34:44.787392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.973 ms 00:20:19.177 [2024-04-24 19:34:44.787408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.177 [2024-04-24 19:34:44.787454] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 47.143 ms, result 0 00:20:19.177 true 00:20:19.178 19:34:44 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:19.437 [2024-04-24 19:34:45.047817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.437 [2024-04-24 19:34:45.047890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:20:19.437 [2024-04-24 19:34:45.047907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.466 ms 00:20:19.437 [2024-04-24 19:34:45.047917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.437 [2024-04-24 19:34:45.047966] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 48.631 ms, result 0 00:20:19.437 true 00:20:19.437 19:34:45 -- ftl/trim.sh@102 -- # killprocess 79765 00:20:19.437 19:34:45 -- common/autotest_common.sh@936 -- # '[' -z 79765 ']' 00:20:19.437 19:34:45 -- common/autotest_common.sh@940 -- # kill -0 79765 00:20:19.437 19:34:45 -- common/autotest_common.sh@941 -- # uname 00:20:19.437 19:34:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:19.437 19:34:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79765 00:20:19.696 19:34:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:19.696 19:34:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:19.696 killing process with pid 79765 00:20:19.696 19:34:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79765' 00:20:19.696 19:34:45 -- common/autotest_common.sh@955 -- # kill 79765 00:20:19.696 19:34:45 -- common/autotest_common.sh@960 -- # wait 79765 00:20:21.078 [2024-04-24 19:34:46.341989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.078 [2024-04-24 19:34:46.342050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:21.078 [2024-04-24 19:34:46.342063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:21.078 [2024-04-24 
19:34:46.342073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.078 [2024-04-24 19:34:46.342094] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:21.078 [2024-04-24 19:34:46.345734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.078 [2024-04-24 19:34:46.345771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:21.078 [2024-04-24 19:34:46.345785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.629 ms 00:20:21.078 [2024-04-24 19:34:46.345793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.078 [2024-04-24 19:34:46.346050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.078 [2024-04-24 19:34:46.346071] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:21.078 [2024-04-24 19:34:46.346082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:20:21.078 [2024-04-24 19:34:46.346103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.078 [2024-04-24 19:34:46.350754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.078 [2024-04-24 19:34:46.350793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:21.078 [2024-04-24 19:34:46.350808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.633 ms 00:20:21.078 [2024-04-24 19:34:46.350817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.078 [2024-04-24 19:34:46.356814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.078 [2024-04-24 19:34:46.356848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:21.078 [2024-04-24 19:34:46.356862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.966 ms 00:20:21.078 [2024-04-24 19:34:46.356870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.078 [2024-04-24 19:34:46.372978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.078 [2024-04-24 19:34:46.373012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:21.078 [2024-04-24 19:34:46.373025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.087 ms 00:20:21.078 [2024-04-24 19:34:46.373033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.078 [2024-04-24 19:34:46.384910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.078 [2024-04-24 19:34:46.384945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:21.078 [2024-04-24 19:34:46.384958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.845 ms 00:20:21.078 [2024-04-24 19:34:46.384966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.078 [2024-04-24 19:34:46.385104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.078 [2024-04-24 19:34:46.385125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:21.078 [2024-04-24 19:34:46.385136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:21.078 [2024-04-24 19:34:46.385144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.078 [2024-04-24 19:34:46.401981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.078 [2024-04-24 19:34:46.402013] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:21.078 [2024-04-24 19:34:46.402025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.848 ms 00:20:21.078 [2024-04-24 19:34:46.402033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.078 [2024-04-24 19:34:46.418554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.078 [2024-04-24 19:34:46.418586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:21.078 [2024-04-24 19:34:46.418601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.506 ms 00:20:21.078 [2024-04-24 19:34:46.418608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.078 [2024-04-24 19:34:46.434917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.078 [2024-04-24 19:34:46.434948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:21.078 [2024-04-24 19:34:46.434959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.280 ms 00:20:21.078 [2024-04-24 19:34:46.434965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.078 [2024-04-24 19:34:46.451074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.078 [2024-04-24 19:34:46.451106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:21.078 [2024-04-24 19:34:46.451118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.080 ms 00:20:21.078 [2024-04-24 19:34:46.451125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.078 [2024-04-24 19:34:46.451169] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:21.078 [2024-04-24 19:34:46.451186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:21.078 [2024-04-24 19:34:46.451205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:21.078 [2024-04-24 19:34:46.451213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:21.078 [2024-04-24 19:34:46.451223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:21.078 [2024-04-24 19:34:46.451232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:21.078 [2024-04-24 19:34:46.451241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:21.078 [2024-04-24 19:34:46.451250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:21.078 [2024-04-24 19:34:46.451262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:21.078 [2024-04-24 19:34:46.451270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:21.078 [2024-04-24 19:34:46.451279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:21.078 [2024-04-24 19:34:46.451288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:21.078 [2024-04-24 19:34:46.451297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:21.078 [2024-04-24 19:34:46.451304] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451528] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 
19:34:46.451754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:20:21.079 [2024-04-24 19:34:46.451977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.451995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.452016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.452029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.452038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.452048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.452055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.452065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.452073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.452082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.452090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.452100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:21.079 [2024-04-24 19:34:46.452116] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:21.079 [2024-04-24 19:34:46.452126] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc9fe5a4-cec1-4dff-a75f-c7daf97221d9 00:20:21.079 [2024-04-24 19:34:46.452135] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:21.079 [2024-04-24 19:34:46.452145] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:21.080 [2024-04-24 19:34:46.452153] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:21.080 [2024-04-24 19:34:46.452163] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:21.080 [2024-04-24 19:34:46.452173] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:21.080 [2024-04-24 19:34:46.452182] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:21.080 [2024-04-24 19:34:46.452191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:21.080 [2024-04-24 19:34:46.452200] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:21.080 [2024-04-24 19:34:46.452207] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:21.080 [2024-04-24 19:34:46.452216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.080 [2024-04-24 19:34:46.452223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:21.080 [2024-04-24 19:34:46.452234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:20:21.080 [2024-04-24 19:34:46.452241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.472909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.080 [2024-04-24 19:34:46.472944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:21.080 [2024-04-24 19:34:46.472959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.681 ms 00:20:21.080 [2024-04-24 19:34:46.472967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.473248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.080 [2024-04-24 19:34:46.473265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:21.080 [2024-04-24 19:34:46.473275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:20:21.080 [2024-04-24 19:34:46.473283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.545780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.080 [2024-04-24 19:34:46.545848] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.080 [2024-04-24 19:34:46.545868] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.080 [2024-04-24 19:34:46.545878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.546019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.080 [2024-04-24 19:34:46.546033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.080 [2024-04-24 19:34:46.546045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.080 [2024-04-24 19:34:46.546054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.546121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.080 [2024-04-24 19:34:46.546135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.080 [2024-04-24 19:34:46.546149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.080 [2024-04-24 19:34:46.546158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.546183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.080 [2024-04-24 19:34:46.546192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.080 [2024-04-24 19:34:46.546203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.080 [2024-04-24 19:34:46.546211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.681426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.080 [2024-04-24 19:34:46.681481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.080 [2024-04-24 19:34:46.681500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.080 [2024-04-24 19:34:46.681509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.734742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.080 [2024-04-24 19:34:46.734799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:21.080 [2024-04-24 19:34:46.734815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.080 
[2024-04-24 19:34:46.734825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.734912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.080 [2024-04-24 19:34:46.734923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.080 [2024-04-24 19:34:46.734934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.080 [2024-04-24 19:34:46.734946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.734980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.080 [2024-04-24 19:34:46.734990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.080 [2024-04-24 19:34:46.735000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.080 [2024-04-24 19:34:46.735009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.735119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.080 [2024-04-24 19:34:46.735140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.080 [2024-04-24 19:34:46.735152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.080 [2024-04-24 19:34:46.735160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.735214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.080 [2024-04-24 19:34:46.735251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:21.080 [2024-04-24 19:34:46.735263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.080 [2024-04-24 19:34:46.735273] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.735317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.080 [2024-04-24 19:34:46.735328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.080 [2024-04-24 19:34:46.735339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.080 [2024-04-24 19:34:46.735348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.735406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.080 [2024-04-24 19:34:46.735418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.080 [2024-04-24 19:34:46.735429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.080 [2024-04-24 19:34:46.735437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.080 [2024-04-24 19:34:46.735588] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 394.330 ms, result 0 00:20:22.988 19:34:48 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:22.988 [2024-04-24 19:34:48.371245] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:20:22.988 [2024-04-24 19:34:48.371388] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79846 ] 00:20:22.988 [2024-04-24 19:34:48.539814] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.246 [2024-04-24 19:34:48.813855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.835 [2024-04-24 19:34:49.287817] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:23.835 [2024-04-24 19:34:49.287896] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:23.835 [2024-04-24 19:34:49.445266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.835 [2024-04-24 19:34:49.445334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:23.835 [2024-04-24 19:34:49.445350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:23.835 [2024-04-24 19:34:49.445363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.835 [2024-04-24 19:34:49.448751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.835 [2024-04-24 19:34:49.448815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:23.835 [2024-04-24 19:34:49.448827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.370 ms 00:20:23.835 [2024-04-24 19:34:49.448836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.835 [2024-04-24 19:34:49.448985] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:23.835 [2024-04-24 19:34:49.450293] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:23.835 [2024-04-24 19:34:49.450330] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.835 [2024-04-24 19:34:49.450340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:23.835 [2024-04-24 19:34:49.450354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.358 ms 00:20:23.835 [2024-04-24 19:34:49.450364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.835 [2024-04-24 19:34:49.451981] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:23.835 [2024-04-24 19:34:49.476010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.835 [2024-04-24 19:34:49.476080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:23.835 [2024-04-24 19:34:49.476096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.073 ms 00:20:23.835 [2024-04-24 19:34:49.476105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.835 [2024-04-24 19:34:49.476306] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.835 [2024-04-24 19:34:49.476332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:23.835 [2024-04-24 19:34:49.476342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:23.835 [2024-04-24 19:34:49.476355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.835 [2024-04-24 19:34:49.484111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.835 [2024-04-24 
19:34:49.484153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:23.835 [2024-04-24 19:34:49.484165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.720 ms 00:20:23.835 [2024-04-24 19:34:49.484175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.835 [2024-04-24 19:34:49.484327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.835 [2024-04-24 19:34:49.484359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:23.835 [2024-04-24 19:34:49.484374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:23.835 [2024-04-24 19:34:49.484384] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.835 [2024-04-24 19:34:49.484427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.835 [2024-04-24 19:34:49.484444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:23.835 [2024-04-24 19:34:49.484456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:23.835 [2024-04-24 19:34:49.484469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.835 [2024-04-24 19:34:49.484498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:23.835 [2024-04-24 19:34:49.491248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.835 [2024-04-24 19:34:49.491299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:23.835 [2024-04-24 19:34:49.491313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.771 ms 00:20:23.835 [2024-04-24 19:34:49.491322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.835 [2024-04-24 19:34:49.491424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.835 [2024-04-24 19:34:49.491449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:23.835 [2024-04-24 19:34:49.491460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:23.835 [2024-04-24 19:34:49.491469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.835 [2024-04-24 19:34:49.491495] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:23.835 [2024-04-24 19:34:49.491524] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:23.836 [2024-04-24 19:34:49.491568] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:23.836 [2024-04-24 19:34:49.491593] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:23.836 [2024-04-24 19:34:49.491690] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:23.836 [2024-04-24 19:34:49.491711] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:23.836 [2024-04-24 19:34:49.491725] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:23.836 [2024-04-24 19:34:49.491739] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:23.836 [2024-04-24 19:34:49.491752] ftl_layout.c: 675:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:23.836 [2024-04-24 19:34:49.491763] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:23.836 [2024-04-24 19:34:49.491772] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:23.836 [2024-04-24 19:34:49.491780] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:23.836 [2024-04-24 19:34:49.491789] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:23.836 [2024-04-24 19:34:49.491799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.836 [2024-04-24 19:34:49.491808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:23.836 [2024-04-24 19:34:49.491822] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:20:23.836 [2024-04-24 19:34:49.491836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.836 [2024-04-24 19:34:49.491907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.836 [2024-04-24 19:34:49.491923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:23.836 [2024-04-24 19:34:49.491933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:23.836 [2024-04-24 19:34:49.491943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:23.836 [2024-04-24 19:34:49.492025] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:23.836 [2024-04-24 19:34:49.492045] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:23.836 [2024-04-24 19:34:49.492055] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.836 [2024-04-24 19:34:49.492068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.836 [2024-04-24 19:34:49.492079] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:23.836 [2024-04-24 19:34:49.492089] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:23.836 [2024-04-24 19:34:49.492099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:23.836 [2024-04-24 19:34:49.492108] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:23.836 [2024-04-24 19:34:49.492118] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:23.836 [2024-04-24 19:34:49.492127] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.836 [2024-04-24 19:34:49.492163] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:23.836 [2024-04-24 19:34:49.492173] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:23.836 [2024-04-24 19:34:49.492182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:23.836 [2024-04-24 19:34:49.492191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:23.836 [2024-04-24 19:34:49.492200] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:20:23.836 [2024-04-24 19:34:49.492207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.836 [2024-04-24 19:34:49.492215] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:23.836 [2024-04-24 19:34:49.492224] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:20:23.836 [2024-04-24 19:34:49.492232] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:20:23.836 [2024-04-24 19:34:49.492240] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:23.836 [2024-04-24 19:34:49.492248] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:20:23.836 [2024-04-24 19:34:49.492257] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:23.836 [2024-04-24 19:34:49.492265] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:23.836 [2024-04-24 19:34:49.492274] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:23.836 [2024-04-24 19:34:49.492282] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:23.836 [2024-04-24 19:34:49.492289] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:23.836 [2024-04-24 19:34:49.492297] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:20:23.836 [2024-04-24 19:34:49.492305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:23.836 [2024-04-24 19:34:49.492313] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:23.836 [2024-04-24 19:34:49.492321] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:23.836 [2024-04-24 19:34:49.492329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:23.836 [2024-04-24 19:34:49.492336] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:23.836 [2024-04-24 19:34:49.492345] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:20:23.836 [2024-04-24 19:34:49.492353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:23.836 [2024-04-24 19:34:49.492361] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:23.836 [2024-04-24 19:34:49.492369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:23.836 [2024-04-24 19:34:49.492376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.836 [2024-04-24 19:34:49.492385] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:23.836 [2024-04-24 19:34:49.492393] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:20:23.836 [2024-04-24 19:34:49.492400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:23.836 [2024-04-24 19:34:49.492407] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:23.836 [2024-04-24 19:34:49.492418] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:23.836 [2024-04-24 19:34:49.492429] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:23.836 [2024-04-24 19:34:49.492438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:23.836 [2024-04-24 19:34:49.492447] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:23.836 [2024-04-24 19:34:49.492472] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:23.836 [2024-04-24 19:34:49.492481] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:23.837 [2024-04-24 19:34:49.492489] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:23.837 [2024-04-24 19:34:49.492497] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:23.837 [2024-04-24 19:34:49.492504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:23.837 [2024-04-24 19:34:49.492513] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:23.837 [2024-04-24 19:34:49.492527] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.837 [2024-04-24 19:34:49.492537] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:23.837 [2024-04-24 19:34:49.492546] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:20:23.837 [2024-04-24 19:34:49.492554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:20:23.837 [2024-04-24 19:34:49.492563] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:20:23.837 [2024-04-24 19:34:49.492572] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:20:23.837 [2024-04-24 19:34:49.492579] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:20:23.837 [2024-04-24 19:34:49.492588] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:20:23.837 [2024-04-24 19:34:49.492597] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:20:23.837 [2024-04-24 19:34:49.492608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:20:23.837 [2024-04-24 19:34:49.492617] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:20:23.837 [2024-04-24 19:34:49.492624] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:20:23.837 [2024-04-24 19:34:49.492645] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:20:23.837 [2024-04-24 19:34:49.492655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:20:23.837 [2024-04-24 19:34:49.492665] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:23.837 [2024-04-24 19:34:49.492675] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:23.837 [2024-04-24 19:34:49.492685] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:23.837 [2024-04-24 19:34:49.492694] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:23.837 [2024-04-24 19:34:49.492703] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:23.837 [2024-04-24 19:34:49.492711] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:23.837 [2024-04-24 19:34:49.492720] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:23.837 [2024-04-24 19:34:49.492730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:23.837 [2024-04-24 19:34:49.492744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:20:23.837 [2024-04-24 19:34:49.492752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.101 [2024-04-24 19:34:49.521481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.101 [2024-04-24 19:34:49.521542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:24.101 [2024-04-24 19:34:49.521557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.721 ms 00:20:24.101 [2024-04-24 19:34:49.521566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.101 [2024-04-24 19:34:49.521758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.101 [2024-04-24 19:34:49.521778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:24.101 [2024-04-24 19:34:49.521789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:24.101 [2024-04-24 19:34:49.521798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.101 [2024-04-24 19:34:49.595550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.101 [2024-04-24 19:34:49.595612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:24.101 [2024-04-24 19:34:49.595627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.865 ms 00:20:24.101 [2024-04-24 19:34:49.595646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.101 [2024-04-24 19:34:49.595764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.101 [2024-04-24 19:34:49.595779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:24.101 [2024-04-24 19:34:49.595789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:24.101 [2024-04-24 19:34:49.595798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.101 [2024-04-24 19:34:49.596284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.102 [2024-04-24 19:34:49.596312] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:24.102 [2024-04-24 19:34:49.596323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:20:24.102 [2024-04-24 19:34:49.596331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.102 [2024-04-24 19:34:49.596472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.102 [2024-04-24 19:34:49.596497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:24.102 [2024-04-24 19:34:49.596507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:20:24.102 [2024-04-24 19:34:49.596516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.102 [2024-04-24 19:34:49.623001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.102 [2024-04-24 19:34:49.623058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:24.102 [2024-04-24 19:34:49.623072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.506 ms 00:20:24.102 
[2024-04-24 19:34:49.623082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.102 [2024-04-24 19:34:49.647275] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:24.102 [2024-04-24 19:34:49.647345] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:24.102 [2024-04-24 19:34:49.647364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.102 [2024-04-24 19:34:49.647374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:24.102 [2024-04-24 19:34:49.647389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.128 ms 00:20:24.102 [2024-04-24 19:34:49.647398] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.102 [2024-04-24 19:34:49.685323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.102 [2024-04-24 19:34:49.685404] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:24.102 [2024-04-24 19:34:49.685420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.839 ms 00:20:24.102 [2024-04-24 19:34:49.685445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.102 [2024-04-24 19:34:49.710274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.102 [2024-04-24 19:34:49.710343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:24.102 [2024-04-24 19:34:49.710358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.636 ms 00:20:24.102 [2024-04-24 19:34:49.710368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.102 [2024-04-24 19:34:49.733927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.102 [2024-04-24 19:34:49.734008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:24.102 [2024-04-24 19:34:49.734023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.441 ms 00:20:24.102 [2024-04-24 19:34:49.734032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.102 [2024-04-24 19:34:49.734715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.102 [2024-04-24 19:34:49.734746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:24.102 [2024-04-24 19:34:49.734757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:20:24.102 [2024-04-24 19:34:49.734782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.361 [2024-04-24 19:34:49.842608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.361 [2024-04-24 19:34:49.842718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:24.361 [2024-04-24 19:34:49.842738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.998 ms 00:20:24.361 [2024-04-24 19:34:49.842747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.361 [2024-04-24 19:34:49.860361] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:24.361 [2024-04-24 19:34:49.879326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.361 [2024-04-24 19:34:49.879397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:24.361 [2024-04-24 19:34:49.879413] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.471 ms 00:20:24.361 [2024-04-24 19:34:49.879423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.361 [2024-04-24 19:34:49.879558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.361 [2024-04-24 19:34:49.879578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:24.361 [2024-04-24 19:34:49.879589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:24.361 [2024-04-24 19:34:49.879598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.361 [2024-04-24 19:34:49.879667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.361 [2024-04-24 19:34:49.879684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:24.361 [2024-04-24 19:34:49.879697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:24.361 [2024-04-24 19:34:49.879706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.361 [2024-04-24 19:34:49.881568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.361 [2024-04-24 19:34:49.881606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:24.361 [2024-04-24 19:34:49.881616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.838 ms 00:20:24.361 [2024-04-24 19:34:49.881624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.361 [2024-04-24 19:34:49.881670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.361 [2024-04-24 19:34:49.881681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:24.361 [2024-04-24 19:34:49.881690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:24.361 [2024-04-24 19:34:49.881704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.361 [2024-04-24 19:34:49.881740] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:24.361 [2024-04-24 19:34:49.881758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.361 [2024-04-24 19:34:49.881767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:24.361 [2024-04-24 19:34:49.881777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:24.361 [2024-04-24 19:34:49.881785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.361 [2024-04-24 19:34:49.929129] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.361 [2024-04-24 19:34:49.929206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:24.361 [2024-04-24 19:34:49.929235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.407 ms 00:20:24.361 [2024-04-24 19:34:49.929245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.361 [2024-04-24 19:34:49.929442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:24.361 [2024-04-24 19:34:49.929466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:24.361 [2024-04-24 19:34:49.929477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:24.361 [2024-04-24 19:34:49.929486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:24.361 [2024-04-24 19:34:49.930570] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:24.361 [2024-04-24 19:34:49.937667] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 485.903 ms, result 0 00:20:24.361 [2024-04-24 19:34:49.938511] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:24.361 [2024-04-24 19:34:49.960538] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:32.894  Copying: 36/256 [MB] (36 MBps) Copying: 67/256 [MB] (31 MBps) Copying: 98/256 [MB] (30 MBps) Copying: 129/256 [MB] (31 MBps) Copying: 162/256 [MB] (32 MBps) Copying: 194/256 [MB] (32 MBps) Copying: 225/256 [MB] (31 MBps) Copying: 256/256 [MB] (average 32 MBps)[2024-04-24 19:34:58.558977] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:33.155 [2024-04-24 19:34:58.576216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-04-24 19:34:58.576280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:33.155 [2024-04-24 19:34:58.576295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:33.155 [2024-04-24 19:34:58.576304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-04-24 19:34:58.576332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:33.155 [2024-04-24 19:34:58.580590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-04-24 19:34:58.580674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:33.155 [2024-04-24 19:34:58.580700] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.248 ms 00:20:33.155 [2024-04-24 19:34:58.580709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-04-24 19:34:58.580997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-04-24 19:34:58.581015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:33.155 [2024-04-24 19:34:58.581024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:20:33.155 [2024-04-24 19:34:58.581031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-04-24 19:34:58.584064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-04-24 19:34:58.584095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:33.155 [2024-04-24 19:34:58.584104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.024 ms 00:20:33.155 [2024-04-24 19:34:58.584112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-04-24 19:34:58.590122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-04-24 19:34:58.590161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:33.155 [2024-04-24 19:34:58.590172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.998 ms 00:20:33.155 [2024-04-24 19:34:58.590180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-04-24 19:34:58.632814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-04-24 19:34:58.632876] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:33.155 [2024-04-24 19:34:58.632889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.648 ms 00:20:33.155 [2024-04-24 19:34:58.632897] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-04-24 19:34:58.657281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-04-24 19:34:58.657336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:33.155 [2024-04-24 19:34:58.657351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.363 ms 00:20:33.155 [2024-04-24 19:34:58.657359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-04-24 19:34:58.657530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-04-24 19:34:58.657570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:33.155 [2024-04-24 19:34:58.657580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:33.155 [2024-04-24 19:34:58.657588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-04-24 19:34:58.697419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-04-24 19:34:58.697467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:33.155 [2024-04-24 19:34:58.697479] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.887 ms 00:20:33.155 [2024-04-24 19:34:58.697487] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-04-24 19:34:58.734903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-04-24 19:34:58.734951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:33.155 [2024-04-24 19:34:58.734964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.442 ms 00:20:33.155 [2024-04-24 19:34:58.734971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-04-24 19:34:58.773126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-04-24 19:34:58.773174] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:33.155 [2024-04-24 19:34:58.773185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.184 ms 00:20:33.155 [2024-04-24 19:34:58.773192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-04-24 19:34:58.809381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.155 [2024-04-24 19:34:58.809430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:33.155 [2024-04-24 19:34:58.809442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.182 ms 00:20:33.155 [2024-04-24 19:34:58.809449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.155 [2024-04-24 19:34:58.809492] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:33.155 [2024-04-24 19:34:58.809507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 
[2024-04-24 19:34:58.809531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:20:33.155 [2024-04-24 19:34:58.809730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:33.155 [2024-04-24 19:34:58.809770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 
261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.809995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:33.156 [2024-04-24 19:34:58.810321] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:33.156 [2024-04-24 19:34:58.810333] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bc9fe5a4-cec1-4dff-a75f-c7daf97221d9 00:20:33.156 [2024-04-24 19:34:58.810341] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
total valid LBAs: 0 00:20:33.156 [2024-04-24 19:34:58.810348] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:33.156 [2024-04-24 19:34:58.810356] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:33.156 [2024-04-24 19:34:58.810363] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:33.156 [2024-04-24 19:34:58.810371] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:33.156 [2024-04-24 19:34:58.810378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:33.156 [2024-04-24 19:34:58.810386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:33.156 [2024-04-24 19:34:58.810392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:33.156 [2024-04-24 19:34:58.810399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:33.156 [2024-04-24 19:34:58.810406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.156 [2024-04-24 19:34:58.810414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:33.156 [2024-04-24 19:34:58.810423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.917 ms 00:20:33.156 [2024-04-24 19:34:58.810431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.416 [2024-04-24 19:34:58.830150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.416 [2024-04-24 19:34:58.830201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:33.416 [2024-04-24 19:34:58.830212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.732 ms 00:20:33.416 [2024-04-24 19:34:58.830220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.416 [2024-04-24 19:34:58.830510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:33.416 [2024-04-24 19:34:58.830527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:33.416 [2024-04-24 19:34:58.830536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:20:33.416 [2024-04-24 19:34:58.830548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.416 [2024-04-24 19:34:58.887239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.416 [2024-04-24 19:34:58.887291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:33.416 [2024-04-24 19:34:58.887303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.416 [2024-04-24 19:34:58.887310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.416 [2024-04-24 19:34:58.887397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.416 [2024-04-24 19:34:58.887406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:33.416 [2024-04-24 19:34:58.887413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.416 [2024-04-24 19:34:58.887426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.416 [2024-04-24 19:34:58.887476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.416 [2024-04-24 19:34:58.887487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:33.416 [2024-04-24 19:34:58.887495] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.416 [2024-04-24 
19:34:58.887503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.416 [2024-04-24 19:34:58.887521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.416 [2024-04-24 19:34:58.887529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:33.416 [2024-04-24 19:34:58.887536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.416 [2024-04-24 19:34:58.887543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.416 [2024-04-24 19:34:59.003016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.416 [2024-04-24 19:34:59.003072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:33.416 [2024-04-24 19:34:59.003084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.416 [2024-04-24 19:34:59.003091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.416 [2024-04-24 19:34:59.048959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.416 [2024-04-24 19:34:59.049011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:33.417 [2024-04-24 19:34:59.049023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.417 [2024-04-24 19:34:59.049036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.417 [2024-04-24 19:34:59.049095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.417 [2024-04-24 19:34:59.049104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:33.417 [2024-04-24 19:34:59.049112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.417 [2024-04-24 19:34:59.049119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.417 [2024-04-24 19:34:59.049144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.417 [2024-04-24 19:34:59.049152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:33.417 [2024-04-24 19:34:59.049160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.417 [2024-04-24 19:34:59.049168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.417 [2024-04-24 19:34:59.049260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.417 [2024-04-24 19:34:59.049272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:33.417 [2024-04-24 19:34:59.049279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.417 [2024-04-24 19:34:59.049287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.417 [2024-04-24 19:34:59.049321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.417 [2024-04-24 19:34:59.049330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:33.417 [2024-04-24 19:34:59.049338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:33.417 [2024-04-24 19:34:59.049349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:33.417 [2024-04-24 19:34:59.049386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:33.417 [2024-04-24 19:34:59.049394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:33.417 [2024-04-24 19:34:59.049402] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:33.417 [2024-04-24 19:34:59.049408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:33.417 [2024-04-24 19:34:59.049449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:20:33.417 [2024-04-24 19:34:59.049458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:20:33.417 [2024-04-24 19:34:59.049466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:20:33.417 [2024-04-24 19:34:59.049473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:20:33.417 [2024-04-24 19:34:59.049622] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 474.325 ms, result 0
00:20:34.794
00:20:34.794
00:20:34.794 19:35:00 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:35.361 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:20:35.361 19:35:00 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:20:35.361 19:35:00 -- ftl/trim.sh@109 -- # fio_kill
00:20:35.361 19:35:00 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:20:35.361 19:35:00 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:20:35.361 19:35:00 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:20:35.361 19:35:00 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:20:35.361 19:35:00 -- ftl/trim.sh@20 -- # killprocess 79765
00:20:35.361 19:35:00 -- common/autotest_common.sh@936 -- # '[' -z 79765 ']'
00:20:35.361 19:35:00 -- common/autotest_common.sh@940 -- # kill -0 79765
00:20:35.361 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79765) - No such process
00:20:35.361 Process with pid 79765 is not found 19:35:00 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79765 is not found'
00:20:35.361
00:20:35.361 real 1m13.137s
00:20:35.361 user 1m44.928s
00:20:35.361 sys 0m6.972s
00:20:35.361 19:35:00 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:35.361 19:35:00 -- common/autotest_common.sh@10 -- # set +x
00:20:35.361 ************************************
00:20:35.361 END TEST ftl_trim
00:20:35.361 ************************************
00:20:35.361 19:35:01 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:20:35.361 19:35:01 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']'
00:20:35.361 19:35:01 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:35.361 19:35:01 -- common/autotest_common.sh@10 -- # set +x
00:20:35.619 ************************************
00:20:35.619 START TEST ftl_restore
00:20:35.619 ************************************
00:20:35.619 19:35:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:20:35.620 * Looking for test storage...
00:20:35.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:20:35.620 19:35:01 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:20:35.620 19:35:01 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:20:35.620 19:35:01 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:20:35.620 19:35:01 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:20:35.620 19:35:01 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:20:35.620 19:35:01 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:20:35.620 19:35:01 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:35.620 19:35:01 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:20:35.620 19:35:01 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:20:35.620 19:35:01 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:35.620 19:35:01 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:35.620 19:35:01 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:20:35.620 19:35:01 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:20:35.620 19:35:01 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:35.620 19:35:01 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:35.620 19:35:01 -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:20:35.620 19:35:01 -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:20:35.620 19:35:01 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:35.620 19:35:01 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:35.620 19:35:01 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:20:35.620 19:35:01 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:20:35.620 19:35:01 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:35.620 19:35:01 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:35.620 19:35:01 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:35.620 19:35:01 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:35.620 19:35:01 -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:20:35.620 19:35:01 -- ftl/common.sh@23 -- # spdk_ini_pid=
00:20:35.620 19:35:01 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:35.620 19:35:01 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:35.620 19:35:01 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:35.620 19:35:01 -- ftl/restore.sh@13 -- # mktemp -d
00:20:35.620 19:35:01 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.gp9vCqaA6q
00:20:35.620 19:35:01 -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:20:35.620 19:35:01 -- ftl/restore.sh@16 -- # case $opt in
00:20:35.620 19:35:01 -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0
00:20:35.620 19:35:01 -- ftl/restore.sh@15 -- # getopts :u:c:f opt
00:20:35.620 19:35:01 -- ftl/restore.sh@23 -- # shift 2
00:20:35.620 19:35:01 -- ftl/restore.sh@24 -- # device=0000:00:11.0
00:20:35.620 19:35:01 -- ftl/restore.sh@25 -- # timeout=240
00:20:35.620 19:35:01 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
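With the options parsed (nv_cache=0000:00:10.0, device=0000:00:11.0, timeout=240), restore.sh starts spdk_tgt below and then assembles its bdev stack over rpc.py. Condensed into a sketch for orientation (a running spdk_tgt is assumed; the rpc.py calls are the same ones that appear in the xtrace output that follows, and the UUIDs are the ones this particular run reports, shown only as examples):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: the QEMU NVMe controller at 0000:00:11.0 comes up as nvme0n1.
$rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

# Drop any leftover lvstore, then carve a thin-provisioned 103424 MiB lvol
# (lvs/nvme0n1p0) to serve as the FTL base bdev.
$rpc_py bdev_lvol_delete_lvstore -u 9fc7b2a3-b1cb-4972-8078-0496be1cc73c  # uuid from bdev_lvol_get_lvstores
$rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
$rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u c2224149-2a40-437a-8bdd-29384eb29570

# NV cache: second controller at 0000:00:10.0, with a 5171 MiB split
# (nvc0n1p0) used as the write-buffer cache.
$rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc_py bdev_split_create nvc0n1 -s 5171 1

# FTL bdev on top of the lvol, with the L2P capped at 10 MiB of DRAM.
$rpc_py -t 240 bdev_ftl_create -b ftl0 -d accc9911-da3d-45a8-81e9-66d8e0b094b7 --l2p_dram_limit 10 -c nvc0n1p0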
00:20:35.620 19:35:01 -- ftl/restore.sh@39 -- # svcpid=80036 00:20:35.620 19:35:01 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.620 19:35:01 -- ftl/restore.sh@41 -- # waitforlisten 80036 00:20:35.620 19:35:01 -- common/autotest_common.sh@817 -- # '[' -z 80036 ']' 00:20:35.620 19:35:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.620 19:35:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:35.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.620 19:35:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.620 19:35:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:35.620 19:35:01 -- common/autotest_common.sh@10 -- # set +x 00:20:35.889 [2024-04-24 19:35:01.411374] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:20:35.889 [2024-04-24 19:35:01.411520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80036 ] 00:20:36.161 [2024-04-24 19:35:01.584869] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.161 [2024-04-24 19:35:01.832868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.535 19:35:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:37.535 19:35:02 -- common/autotest_common.sh@850 -- # return 0 00:20:37.535 19:35:02 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:37.535 19:35:02 -- ftl/common.sh@54 -- # local name=nvme0 00:20:37.536 19:35:02 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:37.536 19:35:02 -- ftl/common.sh@56 -- # local size=103424 00:20:37.536 19:35:02 -- ftl/common.sh@59 -- # local base_bdev 00:20:37.536 19:35:02 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:37.536 19:35:03 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:37.536 19:35:03 -- ftl/common.sh@62 -- # local base_size 00:20:37.536 19:35:03 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:37.536 19:35:03 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:20:37.536 19:35:03 -- common/autotest_common.sh@1365 -- # local bdev_info 00:20:37.536 19:35:03 -- common/autotest_common.sh@1366 -- # local bs 00:20:37.536 19:35:03 -- common/autotest_common.sh@1367 -- # local nb 00:20:37.536 19:35:03 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:37.794 19:35:03 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:20:37.794 { 00:20:37.794 "name": "nvme0n1", 00:20:37.794 "aliases": [ 00:20:37.794 "fcd610da-9b78-4588-a5fe-7119d07bc9f7" 00:20:37.794 ], 00:20:37.794 "product_name": "NVMe disk", 00:20:37.794 "block_size": 4096, 00:20:37.794 "num_blocks": 1310720, 00:20:37.794 "uuid": "fcd610da-9b78-4588-a5fe-7119d07bc9f7", 00:20:37.794 "assigned_rate_limits": { 00:20:37.794 "rw_ios_per_sec": 0, 00:20:37.794 "rw_mbytes_per_sec": 0, 00:20:37.794 "r_mbytes_per_sec": 0, 00:20:37.794 "w_mbytes_per_sec": 0 00:20:37.794 }, 00:20:37.794 "claimed": true, 00:20:37.794 "claim_type": "read_many_write_one", 00:20:37.794 "zoned": false, 00:20:37.794 "supported_io_types": { 00:20:37.794 "read": true, 00:20:37.794 "write": 
true, 00:20:37.794 "unmap": true, 00:20:37.794 "write_zeroes": true, 00:20:37.794 "flush": true, 00:20:37.794 "reset": true, 00:20:37.794 "compare": true, 00:20:37.794 "compare_and_write": false, 00:20:37.794 "abort": true, 00:20:37.794 "nvme_admin": true, 00:20:37.794 "nvme_io": true 00:20:37.794 }, 00:20:37.794 "driver_specific": { 00:20:37.794 "nvme": [ 00:20:37.794 { 00:20:37.794 "pci_address": "0000:00:11.0", 00:20:37.794 "trid": { 00:20:37.794 "trtype": "PCIe", 00:20:37.794 "traddr": "0000:00:11.0" 00:20:37.794 }, 00:20:37.794 "ctrlr_data": { 00:20:37.794 "cntlid": 0, 00:20:37.794 "vendor_id": "0x1b36", 00:20:37.794 "model_number": "QEMU NVMe Ctrl", 00:20:37.794 "serial_number": "12341", 00:20:37.794 "firmware_revision": "8.0.0", 00:20:37.794 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:37.794 "oacs": { 00:20:37.794 "security": 0, 00:20:37.794 "format": 1, 00:20:37.794 "firmware": 0, 00:20:37.794 "ns_manage": 1 00:20:37.794 }, 00:20:37.794 "multi_ctrlr": false, 00:20:37.794 "ana_reporting": false 00:20:37.794 }, 00:20:37.794 "vs": { 00:20:37.794 "nvme_version": "1.4" 00:20:37.794 }, 00:20:37.794 "ns_data": { 00:20:37.794 "id": 1, 00:20:37.794 "can_share": false 00:20:37.794 } 00:20:37.794 } 00:20:37.794 ], 00:20:37.794 "mp_policy": "active_passive" 00:20:37.794 } 00:20:37.794 } 00:20:37.794 ]' 00:20:37.794 19:35:03 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:20:37.794 19:35:03 -- common/autotest_common.sh@1369 -- # bs=4096 00:20:37.794 19:35:03 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:20:37.794 19:35:03 -- common/autotest_common.sh@1370 -- # nb=1310720 00:20:37.794 19:35:03 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:20:37.794 19:35:03 -- common/autotest_common.sh@1374 -- # echo 5120 00:20:37.794 19:35:03 -- ftl/common.sh@63 -- # base_size=5120 00:20:37.794 19:35:03 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:37.794 19:35:03 -- ftl/common.sh@67 -- # clear_lvols 00:20:37.794 19:35:03 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:37.794 19:35:03 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:38.053 19:35:03 -- ftl/common.sh@28 -- # stores=9fc7b2a3-b1cb-4972-8078-0496be1cc73c 00:20:38.053 19:35:03 -- ftl/common.sh@29 -- # for lvs in $stores 00:20:38.053 19:35:03 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9fc7b2a3-b1cb-4972-8078-0496be1cc73c 00:20:38.311 19:35:03 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:38.569 19:35:04 -- ftl/common.sh@68 -- # lvs=c2224149-2a40-437a-8bdd-29384eb29570 00:20:38.570 19:35:04 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c2224149-2a40-437a-8bdd-29384eb29570 00:20:38.829 19:35:04 -- ftl/restore.sh@43 -- # split_bdev=accc9911-da3d-45a8-81e9-66d8e0b094b7 00:20:38.829 19:35:04 -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:38.829 19:35:04 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 accc9911-da3d-45a8-81e9-66d8e0b094b7 00:20:38.829 19:35:04 -- ftl/common.sh@35 -- # local name=nvc0 00:20:38.829 19:35:04 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:38.829 19:35:04 -- ftl/common.sh@37 -- # local base_bdev=accc9911-da3d-45a8-81e9-66d8e0b094b7 00:20:38.829 19:35:04 -- ftl/common.sh@38 -- # local cache_size= 00:20:38.829 19:35:04 -- ftl/common.sh@41 -- # get_bdev_size accc9911-da3d-45a8-81e9-66d8e0b094b7 00:20:38.829 
19:35:04 -- common/autotest_common.sh@1364 -- # local bdev_name=accc9911-da3d-45a8-81e9-66d8e0b094b7 00:20:38.829 19:35:04 -- common/autotest_common.sh@1365 -- # local bdev_info 00:20:38.829 19:35:04 -- common/autotest_common.sh@1366 -- # local bs 00:20:38.829 19:35:04 -- common/autotest_common.sh@1367 -- # local nb 00:20:38.829 19:35:04 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b accc9911-da3d-45a8-81e9-66d8e0b094b7 00:20:38.829 19:35:04 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:20:38.829 { 00:20:38.829 "name": "accc9911-da3d-45a8-81e9-66d8e0b094b7", 00:20:38.829 "aliases": [ 00:20:38.829 "lvs/nvme0n1p0" 00:20:38.829 ], 00:20:38.829 "product_name": "Logical Volume", 00:20:38.829 "block_size": 4096, 00:20:38.830 "num_blocks": 26476544, 00:20:38.830 "uuid": "accc9911-da3d-45a8-81e9-66d8e0b094b7", 00:20:38.830 "assigned_rate_limits": { 00:20:38.830 "rw_ios_per_sec": 0, 00:20:38.830 "rw_mbytes_per_sec": 0, 00:20:38.830 "r_mbytes_per_sec": 0, 00:20:38.830 "w_mbytes_per_sec": 0 00:20:38.830 }, 00:20:38.830 "claimed": false, 00:20:38.830 "zoned": false, 00:20:38.830 "supported_io_types": { 00:20:38.830 "read": true, 00:20:38.830 "write": true, 00:20:38.830 "unmap": true, 00:20:38.830 "write_zeroes": true, 00:20:38.830 "flush": false, 00:20:38.830 "reset": true, 00:20:38.830 "compare": false, 00:20:38.830 "compare_and_write": false, 00:20:38.830 "abort": false, 00:20:38.830 "nvme_admin": false, 00:20:38.830 "nvme_io": false 00:20:38.830 }, 00:20:38.830 "driver_specific": { 00:20:38.830 "lvol": { 00:20:38.830 "lvol_store_uuid": "c2224149-2a40-437a-8bdd-29384eb29570", 00:20:38.830 "base_bdev": "nvme0n1", 00:20:38.830 "thin_provision": true, 00:20:38.830 "snapshot": false, 00:20:38.830 "clone": false, 00:20:38.830 "esnap_clone": false 00:20:38.830 } 00:20:38.830 } 00:20:38.830 } 00:20:38.830 ]' 00:20:38.830 19:35:04 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:20:39.089 19:35:04 -- common/autotest_common.sh@1369 -- # bs=4096 00:20:39.089 19:35:04 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:20:39.089 19:35:04 -- common/autotest_common.sh@1370 -- # nb=26476544 00:20:39.089 19:35:04 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:20:39.089 19:35:04 -- common/autotest_common.sh@1374 -- # echo 103424 00:20:39.089 19:35:04 -- ftl/common.sh@41 -- # local base_size=5171 00:20:39.089 19:35:04 -- ftl/common.sh@44 -- # local nvc_bdev 00:20:39.089 19:35:04 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:39.349 19:35:04 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:39.349 19:35:04 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:39.349 19:35:04 -- ftl/common.sh@48 -- # get_bdev_size accc9911-da3d-45a8-81e9-66d8e0b094b7 00:20:39.349 19:35:04 -- common/autotest_common.sh@1364 -- # local bdev_name=accc9911-da3d-45a8-81e9-66d8e0b094b7 00:20:39.349 19:35:04 -- common/autotest_common.sh@1365 -- # local bdev_info 00:20:39.349 19:35:04 -- common/autotest_common.sh@1366 -- # local bs 00:20:39.349 19:35:04 -- common/autotest_common.sh@1367 -- # local nb 00:20:39.349 19:35:04 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b accc9911-da3d-45a8-81e9-66d8e0b094b7 00:20:39.609 19:35:05 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:20:39.609 { 00:20:39.609 "name": "accc9911-da3d-45a8-81e9-66d8e0b094b7", 00:20:39.609 "aliases": [ 00:20:39.609 "lvs/nvme0n1p0" 00:20:39.609 
], 00:20:39.609 "product_name": "Logical Volume", 00:20:39.609 "block_size": 4096, 00:20:39.609 "num_blocks": 26476544, 00:20:39.609 "uuid": "accc9911-da3d-45a8-81e9-66d8e0b094b7", 00:20:39.609 "assigned_rate_limits": { 00:20:39.609 "rw_ios_per_sec": 0, 00:20:39.609 "rw_mbytes_per_sec": 0, 00:20:39.609 "r_mbytes_per_sec": 0, 00:20:39.609 "w_mbytes_per_sec": 0 00:20:39.609 }, 00:20:39.609 "claimed": false, 00:20:39.609 "zoned": false, 00:20:39.609 "supported_io_types": { 00:20:39.609 "read": true, 00:20:39.609 "write": true, 00:20:39.609 "unmap": true, 00:20:39.609 "write_zeroes": true, 00:20:39.609 "flush": false, 00:20:39.609 "reset": true, 00:20:39.609 "compare": false, 00:20:39.609 "compare_and_write": false, 00:20:39.609 "abort": false, 00:20:39.609 "nvme_admin": false, 00:20:39.609 "nvme_io": false 00:20:39.609 }, 00:20:39.609 "driver_specific": { 00:20:39.609 "lvol": { 00:20:39.609 "lvol_store_uuid": "c2224149-2a40-437a-8bdd-29384eb29570", 00:20:39.609 "base_bdev": "nvme0n1", 00:20:39.609 "thin_provision": true, 00:20:39.609 "snapshot": false, 00:20:39.609 "clone": false, 00:20:39.609 "esnap_clone": false 00:20:39.609 } 00:20:39.609 } 00:20:39.609 } 00:20:39.609 ]' 00:20:39.609 19:35:05 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:20:39.609 19:35:05 -- common/autotest_common.sh@1369 -- # bs=4096 00:20:39.609 19:35:05 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:20:39.609 19:35:05 -- common/autotest_common.sh@1370 -- # nb=26476544 00:20:39.609 19:35:05 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:20:39.609 19:35:05 -- common/autotest_common.sh@1374 -- # echo 103424 00:20:39.609 19:35:05 -- ftl/common.sh@48 -- # cache_size=5171 00:20:39.609 19:35:05 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:39.869 19:35:05 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:39.869 19:35:05 -- ftl/restore.sh@48 -- # get_bdev_size accc9911-da3d-45a8-81e9-66d8e0b094b7 00:20:39.869 19:35:05 -- common/autotest_common.sh@1364 -- # local bdev_name=accc9911-da3d-45a8-81e9-66d8e0b094b7 00:20:39.869 19:35:05 -- common/autotest_common.sh@1365 -- # local bdev_info 00:20:39.869 19:35:05 -- common/autotest_common.sh@1366 -- # local bs 00:20:39.869 19:35:05 -- common/autotest_common.sh@1367 -- # local nb 00:20:39.869 19:35:05 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b accc9911-da3d-45a8-81e9-66d8e0b094b7 00:20:40.128 19:35:05 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:20:40.128 { 00:20:40.128 "name": "accc9911-da3d-45a8-81e9-66d8e0b094b7", 00:20:40.128 "aliases": [ 00:20:40.128 "lvs/nvme0n1p0" 00:20:40.128 ], 00:20:40.128 "product_name": "Logical Volume", 00:20:40.128 "block_size": 4096, 00:20:40.128 "num_blocks": 26476544, 00:20:40.128 "uuid": "accc9911-da3d-45a8-81e9-66d8e0b094b7", 00:20:40.128 "assigned_rate_limits": { 00:20:40.128 "rw_ios_per_sec": 0, 00:20:40.128 "rw_mbytes_per_sec": 0, 00:20:40.128 "r_mbytes_per_sec": 0, 00:20:40.128 "w_mbytes_per_sec": 0 00:20:40.128 }, 00:20:40.128 "claimed": false, 00:20:40.128 "zoned": false, 00:20:40.128 "supported_io_types": { 00:20:40.128 "read": true, 00:20:40.128 "write": true, 00:20:40.128 "unmap": true, 00:20:40.128 "write_zeroes": true, 00:20:40.128 "flush": false, 00:20:40.128 "reset": true, 00:20:40.128 "compare": false, 00:20:40.128 "compare_and_write": false, 00:20:40.128 "abort": false, 00:20:40.128 "nvme_admin": false, 00:20:40.128 "nvme_io": false 00:20:40.128 }, 00:20:40.128 
"driver_specific": { 00:20:40.128 "lvol": { 00:20:40.128 "lvol_store_uuid": "c2224149-2a40-437a-8bdd-29384eb29570", 00:20:40.128 "base_bdev": "nvme0n1", 00:20:40.128 "thin_provision": true, 00:20:40.128 "snapshot": false, 00:20:40.128 "clone": false, 00:20:40.128 "esnap_clone": false 00:20:40.128 } 00:20:40.128 } 00:20:40.128 } 00:20:40.128 ]' 00:20:40.128 19:35:05 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:20:40.128 19:35:05 -- common/autotest_common.sh@1369 -- # bs=4096 00:20:40.128 19:35:05 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:20:40.388 19:35:05 -- common/autotest_common.sh@1370 -- # nb=26476544 00:20:40.388 19:35:05 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:20:40.388 19:35:05 -- common/autotest_common.sh@1374 -- # echo 103424 00:20:40.388 19:35:05 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:40.388 19:35:05 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d accc9911-da3d-45a8-81e9-66d8e0b094b7 --l2p_dram_limit 10' 00:20:40.388 19:35:05 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:40.388 19:35:05 -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:40.388 19:35:05 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:40.388 19:35:05 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:40.388 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:40.388 19:35:05 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d accc9911-da3d-45a8-81e9-66d8e0b094b7 --l2p_dram_limit 10 -c nvc0n1p0 00:20:40.388 [2024-04-24 19:35:05.987476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.388 [2024-04-24 19:35:05.987537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:40.388 [2024-04-24 19:35:05.987554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:40.388 [2024-04-24 19:35:05.987563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.388 [2024-04-24 19:35:05.987622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.388 [2024-04-24 19:35:05.987651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:40.388 [2024-04-24 19:35:05.987667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:40.388 [2024-04-24 19:35:05.987674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.388 [2024-04-24 19:35:05.987697] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:40.388 [2024-04-24 19:35:05.988961] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:40.388 [2024-04-24 19:35:05.988993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.388 [2024-04-24 19:35:05.989002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:40.388 [2024-04-24 19:35:05.989015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.304 ms 00:20:40.388 [2024-04-24 19:35:05.989026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.388 [2024-04-24 19:35:05.989061] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 23b7d3ce-0cda-4f08-8b35-d9af4c3def74 00:20:40.388 [2024-04-24 19:35:05.990504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.389 [2024-04-24 
19:35:05.990536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:40.389 [2024-04-24 19:35:05.990546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:40.389 [2024-04-24 19:35:05.990557] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.389 [2024-04-24 19:35:05.998002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.389 [2024-04-24 19:35:05.998037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:40.389 [2024-04-24 19:35:05.998047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.415 ms 00:20:40.389 [2024-04-24 19:35:05.998056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.389 [2024-04-24 19:35:05.998208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.389 [2024-04-24 19:35:05.998227] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:40.389 [2024-04-24 19:35:05.998237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:20:40.389 [2024-04-24 19:35:05.998247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.389 [2024-04-24 19:35:05.998322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.389 [2024-04-24 19:35:05.998337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:40.389 [2024-04-24 19:35:05.998345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:40.389 [2024-04-24 19:35:05.998355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.389 [2024-04-24 19:35:05.998381] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:40.389 [2024-04-24 19:35:06.004159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.389 [2024-04-24 19:35:06.004195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:40.389 [2024-04-24 19:35:06.004206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.795 ms 00:20:40.389 [2024-04-24 19:35:06.004215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.389 [2024-04-24 19:35:06.004248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.389 [2024-04-24 19:35:06.004257] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:40.389 [2024-04-24 19:35:06.004268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:40.389 [2024-04-24 19:35:06.004276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.389 [2024-04-24 19:35:06.004311] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:40.389 [2024-04-24 19:35:06.004419] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:40.389 [2024-04-24 19:35:06.004460] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:40.389 [2024-04-24 19:35:06.004471] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:40.389 [2024-04-24 19:35:06.004485] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:40.389 [2024-04-24 19:35:06.004496] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:40.389 [2024-04-24 19:35:06.004506] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:40.389 [2024-04-24 19:35:06.004513] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:40.389 [2024-04-24 19:35:06.004522] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:40.389 [2024-04-24 19:35:06.004530] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:40.389 [2024-04-24 19:35:06.004554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.389 [2024-04-24 19:35:06.004562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:40.389 [2024-04-24 19:35:06.004572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:20:40.389 [2024-04-24 19:35:06.004581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.389 [2024-04-24 19:35:06.004659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.389 [2024-04-24 19:35:06.004669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:40.389 [2024-04-24 19:35:06.004681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:40.389 [2024-04-24 19:35:06.004689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.389 [2024-04-24 19:35:06.004755] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:40.389 [2024-04-24 19:35:06.004766] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:40.389 [2024-04-24 19:35:06.004778] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.389 [2024-04-24 19:35:06.004787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.389 [2024-04-24 19:35:06.004797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:40.389 [2024-04-24 19:35:06.004804] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:40.389 [2024-04-24 19:35:06.004813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:40.389 [2024-04-24 19:35:06.004822] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:40.389 [2024-04-24 19:35:06.004831] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:40.389 [2024-04-24 19:35:06.004838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.389 [2024-04-24 19:35:06.004847] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:40.389 [2024-04-24 19:35:06.004855] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:40.389 [2024-04-24 19:35:06.004863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:40.389 [2024-04-24 19:35:06.004870] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:40.389 [2024-04-24 19:35:06.004878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:40.389 [2024-04-24 19:35:06.004884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.389 [2024-04-24 19:35:06.004894] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:40.389 [2024-04-24 19:35:06.004901] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:40.389 [2024-04-24 19:35:06.004911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:40.389 [2024-04-24 19:35:06.004918] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:40.389 [2024-04-24 19:35:06.004928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:40.389 [2024-04-24 19:35:06.004934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:40.389 [2024-04-24 19:35:06.004942] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:40.389 [2024-04-24 19:35:06.004949] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:40.389 [2024-04-24 19:35:06.004957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:40.389 [2024-04-24 19:35:06.004965] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:40.389 [2024-04-24 19:35:06.004973] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:40.389 [2024-04-24 19:35:06.004979] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:40.389 [2024-04-24 19:35:06.004987] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:40.389 [2024-04-24 19:35:06.004993] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:40.389 [2024-04-24 19:35:06.005000] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:40.389 [2024-04-24 19:35:06.005007] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:40.389 [2024-04-24 19:35:06.005014] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:40.389 [2024-04-24 19:35:06.005020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:40.389 [2024-04-24 19:35:06.005030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:40.389 [2024-04-24 19:35:06.005038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:40.389 [2024-04-24 19:35:06.005046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.389 [2024-04-24 19:35:06.005052] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:40.389 [2024-04-24 19:35:06.005060] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:40.389 [2024-04-24 19:35:06.005067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:40.389 [2024-04-24 19:35:06.005075] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:40.389 [2024-04-24 19:35:06.005082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:40.389 [2024-04-24 19:35:06.005091] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:40.389 [2024-04-24 19:35:06.005100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:40.389 [2024-04-24 19:35:06.005109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:40.389 [2024-04-24 19:35:06.005116] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:40.389 [2024-04-24 19:35:06.005125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:40.389 [2024-04-24 19:35:06.005132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:40.389 [2024-04-24 19:35:06.005140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:40.389 [2024-04-24 19:35:06.005147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:40.389 [2024-04-24 19:35:06.005159] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:40.389 [2024-04-24 19:35:06.005169] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.389 [2024-04-24 19:35:06.005180] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:40.389 [2024-04-24 19:35:06.005187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:40.389 [2024-04-24 19:35:06.005196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:40.389 [2024-04-24 19:35:06.005203] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:40.389 [2024-04-24 19:35:06.005211] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:40.389 [2024-04-24 19:35:06.005219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:40.389 [2024-04-24 19:35:06.005227] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:40.389 [2024-04-24 19:35:06.005234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:40.389 [2024-04-24 19:35:06.005244] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:40.390 [2024-04-24 19:35:06.005254] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:40.390 [2024-04-24 19:35:06.005262] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:40.390 [2024-04-24 19:35:06.005269] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:40.390 [2024-04-24 19:35:06.005277] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:40.390 [2024-04-24 19:35:06.005284] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:40.390 [2024-04-24 19:35:06.005296] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:40.390 [2024-04-24 19:35:06.005305] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:40.390 [2024-04-24 19:35:06.005314] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:40.390 [2024-04-24 19:35:06.005321] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:40.390 [2024-04-24 19:35:06.005329] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:20:40.390 [2024-04-24 19:35:06.005336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.390 [2024-04-24 19:35:06.005347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:40.390 [2024-04-24 19:35:06.005355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:20:40.390 [2024-04-24 19:35:06.005364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.390 [2024-04-24 19:35:06.030305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.390 [2024-04-24 19:35:06.030344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:40.390 [2024-04-24 19:35:06.030355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.950 ms 00:20:40.390 [2024-04-24 19:35:06.030364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.390 [2024-04-24 19:35:06.030448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.390 [2024-04-24 19:35:06.030460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:40.390 [2024-04-24 19:35:06.030469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:40.390 [2024-04-24 19:35:06.030480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.650 [2024-04-24 19:35:06.083691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.650 [2024-04-24 19:35:06.083737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:40.650 [2024-04-24 19:35:06.083748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.260 ms 00:20:40.650 [2024-04-24 19:35:06.083757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.650 [2024-04-24 19:35:06.083797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.650 [2024-04-24 19:35:06.083807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:40.650 [2024-04-24 19:35:06.083816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:40.650 [2024-04-24 19:35:06.083827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.650 [2024-04-24 19:35:06.084310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.650 [2024-04-24 19:35:06.084338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:40.650 [2024-04-24 19:35:06.084347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:20:40.650 [2024-04-24 19:35:06.084356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.650 [2024-04-24 19:35:06.084455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.650 [2024-04-24 19:35:06.084475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:40.650 [2024-04-24 19:35:06.084483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:40.650 [2024-04-24 19:35:06.084495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.650 [2024-04-24 19:35:06.108807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.650 [2024-04-24 19:35:06.108853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:40.650 [2024-04-24 19:35:06.108866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.336 ms 00:20:40.650 [2024-04-24 
19:35:06.108879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.650 [2024-04-24 19:35:06.123580] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:40.650 [2024-04-24 19:35:06.126791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.650 [2024-04-24 19:35:06.126827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:40.650 [2024-04-24 19:35:06.126841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.835 ms 00:20:40.650 [2024-04-24 19:35:06.126849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.650 [2024-04-24 19:35:06.222933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:40.650 [2024-04-24 19:35:06.222993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:40.650 [2024-04-24 19:35:06.223009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.224 ms 00:20:40.650 [2024-04-24 19:35:06.223018] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:40.650 [2024-04-24 19:35:06.223072] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:20:40.650 [2024-04-24 19:35:06.223084] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:20:43.940 [2024-04-24 19:35:09.417077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.941 [2024-04-24 19:35:09.417148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:43.941 [2024-04-24 19:35:09.417166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3200.150 ms 00:20:43.941 [2024-04-24 19:35:09.417174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.941 [2024-04-24 19:35:09.417419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.941 [2024-04-24 19:35:09.417445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:43.941 [2024-04-24 19:35:09.417458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:20:43.941 [2024-04-24 19:35:09.417466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.941 [2024-04-24 19:35:09.458873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.941 [2024-04-24 19:35:09.458940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:43.941 [2024-04-24 19:35:09.458959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.399 ms 00:20:43.941 [2024-04-24 19:35:09.458978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.941 [2024-04-24 19:35:09.502945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.941 [2024-04-24 19:35:09.503028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:43.941 [2024-04-24 19:35:09.503048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.949 ms 00:20:43.941 [2024-04-24 19:35:09.503057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.941 [2024-04-24 19:35:09.503610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.941 [2024-04-24 19:35:09.503650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:43.941 [2024-04-24 19:35:09.503664] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:20:43.941 [2024-04-24 19:35:09.503675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:43.941 [2024-04-24 19:35:09.607851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:43.941 [2024-04-24 19:35:09.607937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:43.941 [2024-04-24 19:35:09.607957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.290 ms 00:20:43.941 [2024-04-24 19:35:09.607967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.199 [2024-04-24 19:35:09.651081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.199 [2024-04-24 19:35:09.651139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:44.199 [2024-04-24 19:35:09.651155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.096 ms 00:20:44.199 [2024-04-24 19:35:09.651162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.199 [2024-04-24 19:35:09.652954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.199 [2024-04-24 19:35:09.652996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:44.199 [2024-04-24 19:35:09.653008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms 00:20:44.199 [2024-04-24 19:35:09.653016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.199 [2024-04-24 19:35:09.690273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.199 [2024-04-24 19:35:09.690318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:44.199 [2024-04-24 19:35:09.690333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.266 ms 00:20:44.199 [2024-04-24 19:35:09.690341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.199 [2024-04-24 19:35:09.690388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.199 [2024-04-24 19:35:09.690398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:44.199 [2024-04-24 19:35:09.690407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:44.199 [2024-04-24 19:35:09.690417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.199 [2024-04-24 19:35:09.690512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.199 [2024-04-24 19:35:09.690524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:44.199 [2024-04-24 19:35:09.690533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:20:44.199 [2024-04-24 19:35:09.690541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.199 [2024-04-24 19:35:09.691676] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3710.830 ms, result 0 00:20:44.199 { 00:20:44.199 "name": "ftl0", 00:20:44.199 "uuid": "23b7d3ce-0cda-4f08-8b35-d9af4c3def74" 00:20:44.199 } 00:20:44.199 19:35:09 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:44.199 19:35:09 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:44.458 19:35:09 -- ftl/restore.sh@63 -- # echo ']}' 00:20:44.458 19:35:09 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 
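The echo '{"subsystems": [' / save_subsystem_config -n bdev / echo ']}' trio invoked above snapshots the live bdev configuration as JSON so the restore half of the test can bring the same stack back up. In sketch form (the redirection target is an assumption here, taken to be the ftl.json path that trim.sh removed during its cleanup earlier, since the actual redirect falls outside the xtrace output):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
{
  echo '{"subsystems": ['
  $rpc_py save_subsystem_config -n bdev
  echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json  # assumed path

The bdev_ftl_unload -b ftl0 call above then shuts ftl0 down cleanly; the Persist L2P, Persist NV cache metadata, and Persist superblock actions traced below are what make that saved state recoverable on the next bdev_ftl_create.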
00:20:44.716 [2024-04-24 19:35:10.178082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.716 [2024-04-24 19:35:10.178145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:44.716 [2024-04-24 19:35:10.178158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:44.716 [2024-04-24 19:35:10.178171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.716 [2024-04-24 19:35:10.178195] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:44.716 [2024-04-24 19:35:10.181896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.716 [2024-04-24 19:35:10.181928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:44.716 [2024-04-24 19:35:10.181944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.689 ms 00:20:44.716 [2024-04-24 19:35:10.181952] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.716 [2024-04-24 19:35:10.182206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.716 [2024-04-24 19:35:10.182230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:44.716 [2024-04-24 19:35:10.182257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:20:44.716 [2024-04-24 19:35:10.182265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.716 [2024-04-24 19:35:10.184814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.716 [2024-04-24 19:35:10.184836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:44.716 [2024-04-24 19:35:10.184846] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:20:44.716 [2024-04-24 19:35:10.184854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.716 [2024-04-24 19:35:10.189770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.716 [2024-04-24 19:35:10.189799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:44.716 [2024-04-24 19:35:10.189968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.904 ms 00:20:44.716 [2024-04-24 19:35:10.189983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.716 [2024-04-24 19:35:10.227737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.716 [2024-04-24 19:35:10.227784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:44.716 [2024-04-24 19:35:10.227799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.754 ms 00:20:44.716 [2024-04-24 19:35:10.227808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.716 [2024-04-24 19:35:10.251033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.716 [2024-04-24 19:35:10.251081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:44.716 [2024-04-24 19:35:10.251096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.204 ms 00:20:44.716 [2024-04-24 19:35:10.251104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.716 [2024-04-24 19:35:10.251275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.716 [2024-04-24 19:35:10.251288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:44.716 
[2024-04-24 19:35:10.251301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:20:44.717 [2024-04-24 19:35:10.251309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.717 [2024-04-24 19:35:10.290007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.717 [2024-04-24 19:35:10.290053] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:44.717 [2024-04-24 19:35:10.290071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.749 ms 00:20:44.717 [2024-04-24 19:35:10.290079] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.717 [2024-04-24 19:35:10.328827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.717 [2024-04-24 19:35:10.328873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:44.717 [2024-04-24 19:35:10.328887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.769 ms 00:20:44.717 [2024-04-24 19:35:10.328895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.717 [2024-04-24 19:35:10.366482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.717 [2024-04-24 19:35:10.366533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:44.717 [2024-04-24 19:35:10.366547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.599 ms 00:20:44.717 [2024-04-24 19:35:10.366555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.977 [2024-04-24 19:35:10.405072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.977 [2024-04-24 19:35:10.405122] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:44.977 [2024-04-24 19:35:10.405137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.460 ms 00:20:44.977 [2024-04-24 19:35:10.405144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.977 [2024-04-24 19:35:10.405202] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:44.977 [2024-04-24 19:35:10.405218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: 
free 00:20:44.977 [2024-04-24 19:35:10.405316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 
261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:44.977 [2024-04-24 19:35:10.405848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.405997] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:44.978 [2024-04-24 19:35:10.406145] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:44.978 [2024-04-24 19:35:10.406154] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 23b7d3ce-0cda-4f08-8b35-d9af4c3def74 00:20:44.978 [2024-04-24 19:35:10.406162] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:44.978 [2024-04-24 19:35:10.406172] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:44.978 [2024-04-24 19:35:10.406181] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:44.978 [2024-04-24 19:35:10.406192] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:44.978 [2024-04-24 19:35:10.406200] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:44.978 [2024-04-24 19:35:10.406209] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:44.978 [2024-04-24 19:35:10.406217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:44.978 [2024-04-24 19:35:10.406225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:44.978 [2024-04-24 19:35:10.406232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:44.978 [2024-04-24 19:35:10.406241] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.978 [2024-04-24 19:35:10.406252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:44.978 [2024-04-24 19:35:10.406264] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:20:44.978 [2024-04-24 19:35:10.406271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.978 [2024-04-24 19:35:10.425915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.978 [2024-04-24 19:35:10.425961] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:44.978 [2024-04-24 19:35:10.425974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.625 ms 00:20:44.978 [2024-04-24 19:35:10.425981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.978 [2024-04-24 19:35:10.426225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.978 [2024-04-24 19:35:10.426236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:44.978 [2024-04-24 19:35:10.426246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:20:44.978 [2024-04-24 19:35:10.426253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.978 [2024-04-24 19:35:10.495578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.978 [2024-04-24 19:35:10.495637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.978 [2024-04-24 19:35:10.495653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.978 [2024-04-24 19:35:10.495661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.978 [2024-04-24 19:35:10.495744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.978 [2024-04-24 19:35:10.495754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.978 [2024-04-24 19:35:10.495766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.978 [2024-04-24 19:35:10.495777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.978 [2024-04-24 19:35:10.495878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.978 [2024-04-24 19:35:10.495894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.978 [2024-04-24 19:35:10.495904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.978 [2024-04-24 19:35:10.495912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.978 [2024-04-24 19:35:10.495934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.978 [2024-04-24 19:35:10.495944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.978 [2024-04-24 19:35:10.495954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.978 [2024-04-24 19:35:10.495962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.978 [2024-04-24 19:35:10.615098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.978 [2024-04-24 19:35:10.615152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.978 [2024-04-24 19:35:10.615166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.978 [2024-04-24 19:35:10.615174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:45.238 [2024-04-24 19:35:10.662694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.238 [2024-04-24 19:35:10.662746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.238 [2024-04-24 19:35:10.662763] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.238 [2024-04-24 19:35:10.662772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.238 [2024-04-24 19:35:10.662865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.238 [2024-04-24 19:35:10.662875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.238 [2024-04-24 19:35:10.662885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.238 [2024-04-24 19:35:10.662896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.238 [2024-04-24 19:35:10.662938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.238 [2024-04-24 19:35:10.662947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.238 [2024-04-24 19:35:10.662960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.238 [2024-04-24 19:35:10.662967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.238 [2024-04-24 19:35:10.663075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.238 [2024-04-24 19:35:10.663088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.238 [2024-04-24 19:35:10.663098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.238 [2024-04-24 19:35:10.663106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.238 [2024-04-24 19:35:10.663162] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.238 [2024-04-24 19:35:10.663180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:45.238 [2024-04-24 19:35:10.663194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.238 [2024-04-24 19:35:10.663217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.238 [2024-04-24 19:35:10.663260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.238 [2024-04-24 19:35:10.663273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.238 [2024-04-24 19:35:10.663285] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.238 [2024-04-24 19:35:10.663294] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.238 [2024-04-24 19:35:10.663343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.238 [2024-04-24 19:35:10.663359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.238 [2024-04-24 19:35:10.663373] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.238 [2024-04-24 19:35:10.663382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.238 [2024-04-24 19:35:10.663522] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 486.337 ms, result 0 00:20:45.238 true 00:20:45.238 19:35:10 -- ftl/restore.sh@66 -- # killprocess 80036 00:20:45.238 19:35:10 -- common/autotest_common.sh@936 -- # '[' -z 80036 ']' 00:20:45.238 19:35:10 -- 
common/autotest_common.sh@940 -- # kill -0 80036 00:20:45.238 19:35:10 -- common/autotest_common.sh@941 -- # uname 00:20:45.238 19:35:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:45.238 19:35:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80036 00:20:45.238 19:35:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:45.238 19:35:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:45.238 killing process with pid 80036 00:20:45.238 19:35:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80036' 00:20:45.238 19:35:10 -- common/autotest_common.sh@955 -- # kill 80036 00:20:45.238 19:35:10 -- common/autotest_common.sh@960 -- # wait 80036 00:20:53.386 19:35:17 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:55.918 262144+0 records in 00:20:55.918 262144+0 records out 00:20:55.918 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.75266 s, 286 MB/s 00:20:55.918 19:35:21 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:57.827 19:35:23 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:57.827 [2024-04-24 19:35:23.207324] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:20:57.827 [2024-04-24 19:35:23.207454] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80319 ] 00:20:57.827 [2024-04-24 19:35:23.363829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.087 [2024-04-24 19:35:23.642038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.687 [2024-04-24 19:35:24.120804] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:58.687 [2024-04-24 19:35:24.120877] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:58.687 [2024-04-24 19:35:24.277537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.687 [2024-04-24 19:35:24.277598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:58.687 [2024-04-24 19:35:24.277616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:58.687 [2024-04-24 19:35:24.277627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.687 [2024-04-24 19:35:24.277726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.687 [2024-04-24 19:35:24.277742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.687 [2024-04-24 19:35:24.277753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:58.687 [2024-04-24 19:35:24.277762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.687 [2024-04-24 19:35:24.277791] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:58.687 [2024-04-24 19:35:24.278934] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:58.687 [2024-04-24 19:35:24.278974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.687 [2024-04-24 19:35:24.278988] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.687 [2024-04-24 19:35:24.279001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.191 ms 00:20:58.687 [2024-04-24 19:35:24.279012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.687 [2024-04-24 19:35:24.280627] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:58.687 [2024-04-24 19:35:24.302075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.687 [2024-04-24 19:35:24.302119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:58.687 [2024-04-24 19:35:24.302145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.490 ms 00:20:58.687 [2024-04-24 19:35:24.302155] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.687 [2024-04-24 19:35:24.302244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.687 [2024-04-24 19:35:24.302261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:58.687 [2024-04-24 19:35:24.302276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:58.687 [2024-04-24 19:35:24.302286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.687 [2024-04-24 19:35:24.309624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.687 [2024-04-24 19:35:24.309672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.687 [2024-04-24 19:35:24.309718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.260 ms 00:20:58.687 [2024-04-24 19:35:24.309729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.687 [2024-04-24 19:35:24.309847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.687 [2024-04-24 19:35:24.309864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.687 [2024-04-24 19:35:24.309878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:58.687 [2024-04-24 19:35:24.309889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.687 [2024-04-24 19:35:24.309952] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.687 [2024-04-24 19:35:24.309970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:58.687 [2024-04-24 19:35:24.309982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:58.687 [2024-04-24 19:35:24.309993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.687 [2024-04-24 19:35:24.310028] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:58.687 [2024-04-24 19:35:24.316092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.687 [2024-04-24 19:35:24.316127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.687 [2024-04-24 19:35:24.316142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.086 ms 00:20:58.687 [2024-04-24 19:35:24.316153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.687 [2024-04-24 19:35:24.316195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.687 [2024-04-24 19:35:24.316208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:58.687 [2024-04-24 19:35:24.316221] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:58.687 [2024-04-24 19:35:24.316233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.687 [2024-04-24 19:35:24.316297] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:58.687 [2024-04-24 19:35:24.316333] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:58.687 [2024-04-24 19:35:24.316378] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:58.687 [2024-04-24 19:35:24.316412] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:58.687 [2024-04-24 19:35:24.316507] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:58.687 [2024-04-24 19:35:24.316537] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:58.687 [2024-04-24 19:35:24.316565] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:58.687 [2024-04-24 19:35:24.316581] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:58.687 [2024-04-24 19:35:24.316598] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:58.687 [2024-04-24 19:35:24.316616] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:58.687 [2024-04-24 19:35:24.316629] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:58.688 [2024-04-24 19:35:24.316657] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:58.688 [2024-04-24 19:35:24.316669] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:58.688 [2024-04-24 19:35:24.316681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.688 [2024-04-24 19:35:24.316694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:58.688 [2024-04-24 19:35:24.316707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:20:58.688 [2024-04-24 19:35:24.316719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.688 [2024-04-24 19:35:24.316803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.688 [2024-04-24 19:35:24.316819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:58.688 [2024-04-24 19:35:24.316836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:58.688 [2024-04-24 19:35:24.316848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.688 [2024-04-24 19:35:24.316936] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:58.688 [2024-04-24 19:35:24.316958] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:58.688 [2024-04-24 19:35:24.316973] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.688 [2024-04-24 19:35:24.316985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.688 [2024-04-24 19:35:24.316998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:58.688 [2024-04-24 19:35:24.317010] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 
MiB 00:20:58.688 [2024-04-24 19:35:24.317022] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:58.688 [2024-04-24 19:35:24.317034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:58.688 [2024-04-24 19:35:24.317046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:58.688 [2024-04-24 19:35:24.317058] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.688 [2024-04-24 19:35:24.317070] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:58.688 [2024-04-24 19:35:24.317082] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:58.688 [2024-04-24 19:35:24.317107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.688 [2024-04-24 19:35:24.317119] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:58.688 [2024-04-24 19:35:24.317130] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:58.688 [2024-04-24 19:35:24.317143] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.688 [2024-04-24 19:35:24.317154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:58.688 [2024-04-24 19:35:24.317165] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:58.688 [2024-04-24 19:35:24.317177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.688 [2024-04-24 19:35:24.317188] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:58.688 [2024-04-24 19:35:24.317200] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:58.688 [2024-04-24 19:35:24.317212] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:58.688 [2024-04-24 19:35:24.317223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:58.688 [2024-04-24 19:35:24.317235] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:58.688 [2024-04-24 19:35:24.317247] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:58.688 [2024-04-24 19:35:24.317259] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:58.688 [2024-04-24 19:35:24.317271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:58.688 [2024-04-24 19:35:24.317283] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:58.688 [2024-04-24 19:35:24.317296] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:58.688 [2024-04-24 19:35:24.317310] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:58.688 [2024-04-24 19:35:24.317321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:58.688 [2024-04-24 19:35:24.317334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:58.688 [2024-04-24 19:35:24.317346] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:58.688 [2024-04-24 19:35:24.317358] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:58.688 [2024-04-24 19:35:24.317370] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:58.688 [2024-04-24 19:35:24.317381] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:58.688 [2024-04-24 19:35:24.317393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.688 [2024-04-24 19:35:24.317404] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:20:58.688 [2024-04-24 19:35:24.317416] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:58.688 [2024-04-24 19:35:24.317428] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.688 [2024-04-24 19:35:24.317438] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:58.688 [2024-04-24 19:35:24.317452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:58.688 [2024-04-24 19:35:24.317473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.688 [2024-04-24 19:35:24.317489] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.688 [2024-04-24 19:35:24.317501] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:58.688 [2024-04-24 19:35:24.317513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:58.688 [2024-04-24 19:35:24.317525] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:58.688 [2024-04-24 19:35:24.317537] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:58.688 [2024-04-24 19:35:24.317549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:58.688 [2024-04-24 19:35:24.317561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:58.688 [2024-04-24 19:35:24.317575] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:58.688 [2024-04-24 19:35:24.317590] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.688 [2024-04-24 19:35:24.317605] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:58.688 [2024-04-24 19:35:24.317618] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:58.688 [2024-04-24 19:35:24.317642] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:58.688 [2024-04-24 19:35:24.317656] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:58.688 [2024-04-24 19:35:24.317670] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:58.688 [2024-04-24 19:35:24.317682] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:58.688 [2024-04-24 19:35:24.317696] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:58.688 [2024-04-24 19:35:24.317708] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:58.688 [2024-04-24 19:35:24.317721] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:58.688 [2024-04-24 19:35:24.317736] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:58.688 [2024-04-24 19:35:24.317749] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:58.688 [2024-04-24 19:35:24.317762] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:58.688 [2024-04-24 19:35:24.317776] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:58.688 [2024-04-24 19:35:24.317788] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:58.688 [2024-04-24 19:35:24.317802] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.688 [2024-04-24 19:35:24.317817] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:58.688 [2024-04-24 19:35:24.317830] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:58.688 [2024-04-24 19:35:24.317843] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:58.688 [2024-04-24 19:35:24.317857] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:58.688 [2024-04-24 19:35:24.317871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.688 [2024-04-24 19:35:24.317884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:58.688 [2024-04-24 19:35:24.317898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:20:58.688 [2024-04-24 19:35:24.317911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.688 [2024-04-24 19:35:24.344907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.688 [2024-04-24 19:35:24.344949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:58.688 [2024-04-24 19:35:24.344982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.985 ms 00:20:58.688 [2024-04-24 19:35:24.344993] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.688 [2024-04-24 19:35:24.345105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.688 [2024-04-24 19:35:24.345127] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:58.688 [2024-04-24 19:35:24.345141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:58.688 [2024-04-24 19:35:24.345154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.949 [2024-04-24 19:35:24.416079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.949 [2024-04-24 19:35:24.416154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:58.949 [2024-04-24 19:35:24.416173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.980 ms 00:20:58.949 [2024-04-24 19:35:24.416189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.949 [2024-04-24 19:35:24.416266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.949 [2024-04-24 19:35:24.416278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.949 [2024-04-24 19:35:24.416291] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:58.949 [2024-04-24 19:35:24.416301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.949 [2024-04-24 19:35:24.416822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.949 [2024-04-24 19:35:24.416850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.949 [2024-04-24 19:35:24.416862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:20:58.949 [2024-04-24 19:35:24.416873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.949 [2024-04-24 19:35:24.417018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.949 [2024-04-24 19:35:24.417049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.949 [2024-04-24 19:35:24.417062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:20:58.949 [2024-04-24 19:35:24.417072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.949 [2024-04-24 19:35:24.441929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.949 [2024-04-24 19:35:24.441983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.949 [2024-04-24 19:35:24.442000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.874 ms 00:20:58.949 [2024-04-24 19:35:24.442011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.949 [2024-04-24 19:35:24.464569] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:58.949 [2024-04-24 19:35:24.464617] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:58.949 [2024-04-24 19:35:24.464641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.949 [2024-04-24 19:35:24.464653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:58.949 [2024-04-24 19:35:24.464666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.504 ms 00:20:58.949 [2024-04-24 19:35:24.464675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.949 [2024-04-24 19:35:24.499824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.949 [2024-04-24 19:35:24.499888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:58.949 [2024-04-24 19:35:24.499906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.156 ms 00:20:58.949 [2024-04-24 19:35:24.499918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.949 [2024-04-24 19:35:24.521689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.949 [2024-04-24 19:35:24.521737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:58.949 [2024-04-24 19:35:24.521753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.733 ms 00:20:58.949 [2024-04-24 19:35:24.521763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.949 [2024-04-24 19:35:24.544040] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.949 [2024-04-24 19:35:24.544090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:58.949 [2024-04-24 19:35:24.544108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.246 ms 
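For orientation amid the startup noise: after the shutdown above, the test kills the running SPDK app (the killprocess trace for pid 80036), generates 1 GiB of random data, records its checksum, and writes it into ftl0 with spdk_dd, which reopens the FTL device from the saved ftl.json. The 'FTL startup' actions in this stretch, including the 'Restore ...' metadata steps, come from that reopening. A condensed sketch of the traced commands (paths copied from the log; the eventual read-back and checksum comparison presumably happens later in restore.sh, outside this excerpt):

    SPDK=/home/vagrant/spdk_repo/spdk
    # restore.sh@69-70: 1 GiB of random test data plus its reference checksum
    dd if=/dev/urandom of="$SPDK/test/ftl/testfile" bs=4K count=256K
    md5sum "$SPDK/test/ftl/testfile"
    # restore.sh@73: write the file through the FTL bdev; --json brings ftl0
    # back up from the configuration saved before the unload
    "$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile" --ob=ftl0 \
      --json="$SPDK/test/ftl/config/ftl.json"
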
00:20:58.949 [2024-04-24 19:35:24.544119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.949 [2024-04-24 19:35:24.544709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.949 [2024-04-24 19:35:24.544744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:58.949 [2024-04-24 19:35:24.544760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.440 ms 00:20:58.949 [2024-04-24 19:35:24.544772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.208 [2024-04-24 19:35:24.644474] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.208 [2024-04-24 19:35:24.644548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:59.208 [2024-04-24 19:35:24.644569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.865 ms 00:20:59.208 [2024-04-24 19:35:24.644582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.208 [2024-04-24 19:35:24.661801] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:59.208 [2024-04-24 19:35:24.665407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.208 [2024-04-24 19:35:24.665450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:59.208 [2024-04-24 19:35:24.665467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.744 ms 00:20:59.208 [2024-04-24 19:35:24.665480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.208 [2024-04-24 19:35:24.665612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.208 [2024-04-24 19:35:24.665641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:59.208 [2024-04-24 19:35:24.665666] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:59.208 [2024-04-24 19:35:24.665678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.208 [2024-04-24 19:35:24.665794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.208 [2024-04-24 19:35:24.665819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:59.208 [2024-04-24 19:35:24.665835] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:59.208 [2024-04-24 19:35:24.665848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.208 [2024-04-24 19:35:24.667796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.208 [2024-04-24 19:35:24.667834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:59.208 [2024-04-24 19:35:24.667849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.918 ms 00:20:59.208 [2024-04-24 19:35:24.667865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.208 [2024-04-24 19:35:24.667912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.208 [2024-04-24 19:35:24.667928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:59.208 [2024-04-24 19:35:24.667940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:59.208 [2024-04-24 19:35:24.667953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.208 [2024-04-24 19:35:24.667999] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:59.208 
[2024-04-24 19:35:24.668017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.208 [2024-04-24 19:35:24.668030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:59.208 [2024-04-24 19:35:24.668044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:59.208 [2024-04-24 19:35:24.668057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.208 [2024-04-24 19:35:24.716203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.208 [2024-04-24 19:35:24.716268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:59.208 [2024-04-24 19:35:24.716288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.204 ms 00:20:59.208 [2024-04-24 19:35:24.716301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.208 [2024-04-24 19:35:24.716440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.208 [2024-04-24 19:35:24.716455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:59.208 [2024-04-24 19:35:24.716468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:59.208 [2024-04-24 19:35:24.716486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.208 [2024-04-24 19:35:24.717855] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 440.666 ms, result 0 00:21:29.682  Copying: 37/1024 [MB] (37 MBps) Copying: 71/1024 [MB] (34 MBps) Copying: 95/1024 [MB] (23 MBps) Copying: 126/1024 [MB] (31 MBps) Copying: 159/1024 [MB] (32 MBps) Copying: 192/1024 [MB] (33 MBps) Copying: 225/1024 [MB] (32 MBps) Copying: 259/1024 [MB] (33 MBps) Copying: 293/1024 [MB] (33 MBps) Copying: 327/1024 [MB] (33 MBps) Copying: 361/1024 [MB] (34 MBps) Copying: 395/1024 [MB] (33 MBps) Copying: 428/1024 [MB] (33 MBps) Copying: 462/1024 [MB] (34 MBps) Copying: 498/1024 [MB] (35 MBps) Copying: 531/1024 [MB] (33 MBps) Copying: 566/1024 [MB] (34 MBps) Copying: 600/1024 [MB] (33 MBps) Copying: 634/1024 [MB] (33 MBps) Copying: 668/1024 [MB] (34 MBps) Copying: 702/1024 [MB] (34 MBps) Copying: 735/1024 [MB] (32 MBps) Copying: 769/1024 [MB] (33 MBps) Copying: 802/1024 [MB] (33 MBps) Copying: 835/1024 [MB] (32 MBps) Copying: 868/1024 [MB] (33 MBps) Copying: 901/1024 [MB] (33 MBps) Copying: 935/1024 [MB] (33 MBps) Copying: 968/1024 [MB] (32 MBps) Copying: 1002/1024 [MB] (34 MBps) Copying: 1024/1024 [MB] (average 33 MBps)[2024-04-24 19:35:55.338112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.682 [2024-04-24 19:35:55.338183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:29.682 [2024-04-24 19:35:55.338218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:29.682 [2024-04-24 19:35:55.338228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.682 [2024-04-24 19:35:55.338249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:29.682 [2024-04-24 19:35:55.342254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.682 [2024-04-24 19:35:55.342288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:29.682 [2024-04-24 19:35:55.342298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.997 ms 00:21:29.682 [2024-04-24 19:35:55.342306] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.682 [2024-04-24 19:35:55.344246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.682 [2024-04-24 19:35:55.344285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:29.682 [2024-04-24 19:35:55.344303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.922 ms 00:21:29.682 [2024-04-24 19:35:55.344312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.948 [2024-04-24 19:35:55.361783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.948 [2024-04-24 19:35:55.361825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:29.948 [2024-04-24 19:35:55.361837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.485 ms 00:21:29.948 [2024-04-24 19:35:55.361845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.948 [2024-04-24 19:35:55.367188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.948 [2024-04-24 19:35:55.367235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:29.948 [2024-04-24 19:35:55.367256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.320 ms 00:21:29.948 [2024-04-24 19:35:55.367278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.948 [2024-04-24 19:35:55.406293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.948 [2024-04-24 19:35:55.406357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:29.948 [2024-04-24 19:35:55.406374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.032 ms 00:21:29.948 [2024-04-24 19:35:55.406401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.948 [2024-04-24 19:35:55.429291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.948 [2024-04-24 19:35:55.429339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:29.948 [2024-04-24 19:35:55.429353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.877 ms 00:21:29.948 [2024-04-24 19:35:55.429360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.948 [2024-04-24 19:35:55.429496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.948 [2024-04-24 19:35:55.429507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:29.948 [2024-04-24 19:35:55.429521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:29.948 [2024-04-24 19:35:55.429528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.948 [2024-04-24 19:35:55.469904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.948 [2024-04-24 19:35:55.469958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:29.948 [2024-04-24 19:35:55.469971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.436 ms 00:21:29.948 [2024-04-24 19:35:55.469979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.948 [2024-04-24 19:35:55.508906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.948 [2024-04-24 19:35:55.508988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:29.948 [2024-04-24 19:35:55.509002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 38.949 ms 00:21:29.948 [2024-04-24 19:35:55.509010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.948 [2024-04-24 19:35:55.548730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.948 [2024-04-24 19:35:55.548782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:29.948 [2024-04-24 19:35:55.548812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.731 ms 00:21:29.948 [2024-04-24 19:35:55.548820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.948 [2024-04-24 19:35:55.591070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.948 [2024-04-24 19:35:55.591124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:29.948 [2024-04-24 19:35:55.591138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.216 ms 00:21:29.948 [2024-04-24 19:35:55.591161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.948 [2024-04-24 19:35:55.591226] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:29.948 [2024-04-24 19:35:55.591248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:29.948 [2024-04-24 19:35:55.591271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:29.948 [2024-04-24 19:35:55.591283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:29.948 [2024-04-24 19:35:55.591295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:29.948 [2024-04-24 19:35:55.591307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:29.948 [2024-04-24 19:35:55.591319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:29.948 [2024-04-24 19:35:55.591330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:29.948 [2024-04-24 19:35:55.591342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:29.948 [2024-04-24 19:35:55.591355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:29.948 [2024-04-24 19:35:55.591367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 
state: free 00:21:29.949 [2024-04-24 19:35:55.591443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 
0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.591992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592074] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:29.949 [2024-04-24 19:35:55.592159] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:29.949 [2024-04-24 19:35:55.592167] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 23b7d3ce-0cda-4f08-8b35-d9af4c3def74 00:21:29.949 [2024-04-24 19:35:55.592175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:29.949 [2024-04-24 19:35:55.592183] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:29.949 [2024-04-24 19:35:55.592190] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:29.949 [2024-04-24 19:35:55.592204] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:29.949 [2024-04-24 19:35:55.592226] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:29.949 [2024-04-24 19:35:55.592234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:29.950 [2024-04-24 19:35:55.592242] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:29.950 [2024-04-24 19:35:55.592249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:29.950 [2024-04-24 19:35:55.592256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:29.950 [2024-04-24 19:35:55.592264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.950 [2024-04-24 19:35:55.592272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:29.950 [2024-04-24 19:35:55.592281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:21:29.950 [2024-04-24 19:35:55.592288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.950 [2024-04-24 19:35:55.615408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.950 [2024-04-24 19:35:55.615453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:29.950 [2024-04-24 19:35:55.615473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.120 ms 00:21:29.950 [2024-04-24 19:35:55.615482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.950 [2024-04-24 19:35:55.615735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.950 [2024-04-24 19:35:55.615748] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:29.950 [2024-04-24 19:35:55.615758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:21:29.950 [2024-04-24 19:35:55.615765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.209 [2024-04-24 19:35:55.673277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.209 [2024-04-24 19:35:55.673331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:30.210 [2024-04-24 19:35:55.673343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.210 [2024-04-24 19:35:55.673350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.210 [2024-04-24 19:35:55.673416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.210 [2024-04-24 19:35:55.673424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:30.210 [2024-04-24 19:35:55.673431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.210 [2024-04-24 19:35:55.673438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.210 [2024-04-24 19:35:55.673509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.210 [2024-04-24 19:35:55.673519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:30.210 [2024-04-24 19:35:55.673532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.210 [2024-04-24 19:35:55.673539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.210 [2024-04-24 19:35:55.673555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.210 [2024-04-24 19:35:55.673562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:30.210 [2024-04-24 19:35:55.673569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.210 [2024-04-24 19:35:55.673576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.210 [2024-04-24 19:35:55.794215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.210 [2024-04-24 19:35:55.794291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:30.210 [2024-04-24 19:35:55.794304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.210 [2024-04-24 19:35:55.794312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.210 [2024-04-24 19:35:55.843050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.210 [2024-04-24 19:35:55.843134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:30.210 [2024-04-24 19:35:55.843149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.210 [2024-04-24 19:35:55.843158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.210 [2024-04-24 19:35:55.843229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.210 [2024-04-24 19:35:55.843242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:30.210 [2024-04-24 19:35:55.843255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.210 [2024-04-24 19:35:55.843279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.210 [2024-04-24 19:35:55.843320] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:21:30.210 [2024-04-24 19:35:55.843332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:30.210 [2024-04-24 19:35:55.843343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.210 [2024-04-24 19:35:55.843353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.210 [2024-04-24 19:35:55.843608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.210 [2024-04-24 19:35:55.843653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:30.210 [2024-04-24 19:35:55.843664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.210 [2024-04-24 19:35:55.843672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.210 [2024-04-24 19:35:55.843737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.210 [2024-04-24 19:35:55.843749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:30.210 [2024-04-24 19:35:55.843758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.210 [2024-04-24 19:35:55.843766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.210 [2024-04-24 19:35:55.843806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.210 [2024-04-24 19:35:55.843815] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:30.210 [2024-04-24 19:35:55.843823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.210 [2024-04-24 19:35:55.843832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.210 [2024-04-24 19:35:55.843881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:30.210 [2024-04-24 19:35:55.843892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:30.210 [2024-04-24 19:35:55.843900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:30.210 [2024-04-24 19:35:55.843908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.210 [2024-04-24 19:35:55.844035] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 506.865 ms, result 0 00:21:32.113 00:21:32.113 00:21:32.113 19:35:57 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:32.113 [2024-04-24 19:35:57.743718] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:21:32.114 [2024-04-24 19:35:57.743838] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80669 ] 00:21:32.372 [2024-04-24 19:35:57.905352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.631 [2024-04-24 19:35:58.155361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:32.890 [2024-04-24 19:35:58.557906] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:32.890 [2024-04-24 19:35:58.557973] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:33.149 [2024-04-24 19:35:58.707591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.150 [2024-04-24 19:35:58.707673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:33.150 [2024-04-24 19:35:58.707688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:33.150 [2024-04-24 19:35:58.707695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.150 [2024-04-24 19:35:58.707750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.150 [2024-04-24 19:35:58.707761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:33.150 [2024-04-24 19:35:58.707769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:33.150 [2024-04-24 19:35:58.707776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.150 [2024-04-24 19:35:58.707795] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:33.150 [2024-04-24 19:35:58.708989] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:33.150 [2024-04-24 19:35:58.709016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.150 [2024-04-24 19:35:58.709023] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:33.150 [2024-04-24 19:35:58.709032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:21:33.150 [2024-04-24 19:35:58.709039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.150 [2024-04-24 19:35:58.710426] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:33.150 [2024-04-24 19:35:58.730450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.150 [2024-04-24 19:35:58.730486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:33.150 [2024-04-24 19:35:58.730503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.062 ms 00:21:33.150 [2024-04-24 19:35:58.730510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.150 [2024-04-24 19:35:58.730589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.150 [2024-04-24 19:35:58.730599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:33.150 [2024-04-24 19:35:58.730608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:33.150 [2024-04-24 19:35:58.730615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.150 [2024-04-24 19:35:58.739115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.150 [2024-04-24 
19:35:58.739171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:33.150 [2024-04-24 19:35:58.739187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.444 ms 00:21:33.150 [2024-04-24 19:35:58.739222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.150 [2024-04-24 19:35:58.739344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.150 [2024-04-24 19:35:58.739362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:33.150 [2024-04-24 19:35:58.739375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:21:33.150 [2024-04-24 19:35:58.739386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.150 [2024-04-24 19:35:58.739449] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.150 [2024-04-24 19:35:58.739467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:33.150 [2024-04-24 19:35:58.739480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:33.150 [2024-04-24 19:35:58.739491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.150 [2024-04-24 19:35:58.739526] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:33.150 [2024-04-24 19:35:58.744610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.150 [2024-04-24 19:35:58.744673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:33.150 [2024-04-24 19:35:58.744691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.103 ms 00:21:33.150 [2024-04-24 19:35:58.744719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.150 [2024-04-24 19:35:58.744760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.150 [2024-04-24 19:35:58.744771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:33.150 [2024-04-24 19:35:58.744784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:33.150 [2024-04-24 19:35:58.744797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.150 [2024-04-24 19:35:58.744858] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:33.150 [2024-04-24 19:35:58.744896] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:33.150 [2024-04-24 19:35:58.744966] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:33.150 [2024-04-24 19:35:58.744990] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:33.150 [2024-04-24 19:35:58.745065] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:33.150 [2024-04-24 19:35:58.745083] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:33.150 [2024-04-24 19:35:58.745097] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:33.150 [2024-04-24 19:35:58.745112] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:33.150 [2024-04-24 19:35:58.745124] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:33.150 [2024-04-24 19:35:58.745142] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:33.150 [2024-04-24 19:35:58.745153] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:33.150 [2024-04-24 19:35:58.745165] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:33.150 [2024-04-24 19:35:58.745175] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:33.150 [2024-04-24 19:35:58.745187] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.150 [2024-04-24 19:35:58.745198] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:33.150 [2024-04-24 19:35:58.745211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.333 ms 00:21:33.150 [2024-04-24 19:35:58.745222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.150 [2024-04-24 19:35:58.745297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.150 [2024-04-24 19:35:58.745317] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:33.150 [2024-04-24 19:35:58.745333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:33.150 [2024-04-24 19:35:58.745344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.150 [2024-04-24 19:35:58.745424] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:33.150 [2024-04-24 19:35:58.745463] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:33.150 [2024-04-24 19:35:58.745476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:33.150 [2024-04-24 19:35:58.745488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.150 [2024-04-24 19:35:58.745500] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:33.150 [2024-04-24 19:35:58.745513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:33.150 [2024-04-24 19:35:58.745524] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:33.150 [2024-04-24 19:35:58.745535] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:33.150 [2024-04-24 19:35:58.745546] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:33.150 [2024-04-24 19:35:58.745556] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:33.150 [2024-04-24 19:35:58.745567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:33.150 [2024-04-24 19:35:58.745578] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:33.150 [2024-04-24 19:35:58.745604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:33.150 [2024-04-24 19:35:58.745615] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:33.150 [2024-04-24 19:35:58.745626] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:33.150 [2024-04-24 19:35:58.745652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.150 [2024-04-24 19:35:58.745663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:33.150 [2024-04-24 19:35:58.745673] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:33.150 [2024-04-24 19:35:58.745683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:33.150 [2024-04-24 19:35:58.745694] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:33.150 [2024-04-24 19:35:58.745704] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:33.150 [2024-04-24 19:35:58.745712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:33.150 [2024-04-24 19:35:58.745723] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:33.150 [2024-04-24 19:35:58.745732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:33.150 [2024-04-24 19:35:58.745743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:33.150 [2024-04-24 19:35:58.745754] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:33.150 [2024-04-24 19:35:58.745765] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:33.150 [2024-04-24 19:35:58.745775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:33.151 [2024-04-24 19:35:58.745785] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:33.151 [2024-04-24 19:35:58.745796] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:33.151 [2024-04-24 19:35:58.745806] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:33.151 [2024-04-24 19:35:58.745816] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:33.151 [2024-04-24 19:35:58.745828] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:33.151 [2024-04-24 19:35:58.745838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:33.151 [2024-04-24 19:35:58.745848] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:33.151 [2024-04-24 19:35:58.745858] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:33.151 [2024-04-24 19:35:58.745868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:33.151 [2024-04-24 19:35:58.745879] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:33.151 [2024-04-24 19:35:58.745890] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:33.151 [2024-04-24 19:35:58.745900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:33.151 [2024-04-24 19:35:58.745910] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:33.151 [2024-04-24 19:35:58.745922] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:33.151 [2024-04-24 19:35:58.745938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:33.151 [2024-04-24 19:35:58.745953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:33.151 [2024-04-24 19:35:58.745963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:33.151 [2024-04-24 19:35:58.745972] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:33.151 [2024-04-24 19:35:58.745981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:33.151 [2024-04-24 19:35:58.745990] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:33.151 [2024-04-24 19:35:58.746000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:33.151 [2024-04-24 19:35:58.746011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:33.151 [2024-04-24 19:35:58.746022] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:33.151 [2024-04-24 19:35:58.746035] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:33.151 [2024-04-24 19:35:58.746048] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:33.151 [2024-04-24 19:35:58.746060] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:33.151 [2024-04-24 19:35:58.746071] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:33.151 [2024-04-24 19:35:58.746082] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:33.151 [2024-04-24 19:35:58.746093] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:33.151 [2024-04-24 19:35:58.746105] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:33.151 [2024-04-24 19:35:58.746117] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:33.151 [2024-04-24 19:35:58.746128] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:33.151 [2024-04-24 19:35:58.746139] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:33.151 [2024-04-24 19:35:58.746152] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:33.151 [2024-04-24 19:35:58.746163] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:33.151 [2024-04-24 19:35:58.746175] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:33.151 [2024-04-24 19:35:58.746187] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:33.151 [2024-04-24 19:35:58.746199] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:33.151 [2024-04-24 19:35:58.746211] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:33.151 [2024-04-24 19:35:58.746223] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:33.151 [2024-04-24 19:35:58.746236] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:33.151 [2024-04-24 19:35:58.746248] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:33.151 [2024-04-24 19:35:58.746260] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:21:33.151 [2024-04-24 19:35:58.746274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.151 [2024-04-24 19:35:58.746285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:33.151 [2024-04-24 19:35:58.746297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.889 ms 00:21:33.151 [2024-04-24 19:35:58.746309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.151 [2024-04-24 19:35:58.769130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.151 [2024-04-24 19:35:58.769191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:33.151 [2024-04-24 19:35:58.769210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.800 ms 00:21:33.151 [2024-04-24 19:35:58.769222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.151 [2024-04-24 19:35:58.769325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.151 [2024-04-24 19:35:58.769347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:33.151 [2024-04-24 19:35:58.769359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:33.151 [2024-04-24 19:35:58.769371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.151 [2024-04-24 19:35:58.822096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.151 [2024-04-24 19:35:58.822163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:33.151 [2024-04-24 19:35:58.822183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.745 ms 00:21:33.151 [2024-04-24 19:35:58.822201] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.151 [2024-04-24 19:35:58.822280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.151 [2024-04-24 19:35:58.822293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:33.151 [2024-04-24 19:35:58.822304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:33.151 [2024-04-24 19:35:58.822314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.151 [2024-04-24 19:35:58.822958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.151 [2024-04-24 19:35:58.822982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:33.151 [2024-04-24 19:35:58.822995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:21:33.151 [2024-04-24 19:35:58.823005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.151 [2024-04-24 19:35:58.823160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.151 [2024-04-24 19:35:58.823185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:33.151 [2024-04-24 19:35:58.823197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:21:33.151 [2024-04-24 19:35:58.823207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:58.844719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:58.844769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:33.411 [2024-04-24 19:35:58.844798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.509 ms 00:21:33.411 [2024-04-24 
19:35:58.844806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:58.865051] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:33.411 [2024-04-24 19:35:58.865096] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:33.411 [2024-04-24 19:35:58.865109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:58.865117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:33.411 [2024-04-24 19:35:58.865127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.208 ms 00:21:33.411 [2024-04-24 19:35:58.865134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:58.896348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:58.896397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:33.411 [2024-04-24 19:35:58.896425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.220 ms 00:21:33.411 [2024-04-24 19:35:58.896433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:58.915657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:58.915699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:33.411 [2024-04-24 19:35:58.915710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.211 ms 00:21:33.411 [2024-04-24 19:35:58.915729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:58.934331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:58.934369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:33.411 [2024-04-24 19:35:58.934381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.596 ms 00:21:33.411 [2024-04-24 19:35:58.934387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:58.934878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:58.934898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:33.411 [2024-04-24 19:35:58.934907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:21:33.411 [2024-04-24 19:35:58.934915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:59.028415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:59.028480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:33.411 [2024-04-24 19:35:59.028494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.662 ms 00:21:33.411 [2024-04-24 19:35:59.028501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:59.042575] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:33.411 [2024-04-24 19:35:59.046121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:59.046162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:33.411 [2024-04-24 19:35:59.046176] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.576 ms 00:21:33.411 [2024-04-24 19:35:59.046185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:59.046290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:59.046313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:33.411 [2024-04-24 19:35:59.046324] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:33.411 [2024-04-24 19:35:59.046331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:59.046397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:59.046413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:33.411 [2024-04-24 19:35:59.046422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:33.411 [2024-04-24 19:35:59.046430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:59.048260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:59.048294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:33.411 [2024-04-24 19:35:59.048307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.817 ms 00:21:33.411 [2024-04-24 19:35:59.048315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:59.048349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:59.048359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:33.411 [2024-04-24 19:35:59.048367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:33.411 [2024-04-24 19:35:59.048375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.411 [2024-04-24 19:35:59.048409] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:33.411 [2024-04-24 19:35:59.048419] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.411 [2024-04-24 19:35:59.048428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:33.411 [2024-04-24 19:35:59.048435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:33.411 [2024-04-24 19:35:59.048446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.670 [2024-04-24 19:35:59.088867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.670 [2024-04-24 19:35:59.088915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:33.670 [2024-04-24 19:35:59.088928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.480 ms 00:21:33.670 [2024-04-24 19:35:59.088935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.670 [2024-04-24 19:35:59.089038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:33.670 [2024-04-24 19:35:59.089054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:33.670 [2024-04-24 19:35:59.089063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:33.670 [2024-04-24 19:35:59.089071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:33.670 [2024-04-24 19:35:59.090265] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 382.911 ms, result 0 00:22:02.620  Copying: 1024/1024 [MB] (average 35 MBps)[2024-04-24 19:36:28.271978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.620 [2024-04-24 19:36:28.272058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:02.620 [2024-04-24 19:36:28.272074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:02.620 [2024-04-24 19:36:28.272083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.620 [2024-04-24 19:36:28.272108] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:02.620 [2024-04-24 19:36:28.277112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.620 [2024-04-24 19:36:28.277189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:02.620 [2024-04-24 19:36:28.277207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.994 ms 00:22:02.620 [2024-04-24 19:36:28.277215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.620 [2024-04-24 19:36:28.277477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.620 [2024-04-24 19:36:28.277496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:02.620 [2024-04-24 19:36:28.277506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:22:02.620 [2024-04-24 19:36:28.277515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.620 [2024-04-24 19:36:28.280935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.620 [2024-04-24 19:36:28.280962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:02.620 [2024-04-24 19:36:28.280972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.413 ms 00:22:02.620 [2024-04-24 19:36:28.280981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.620 [2024-04-24 19:36:28.287426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.620 [2024-04-24 19:36:28.287482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:02.620 [2024-04-24 19:36:28.287503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.436 ms 00:22:02.620 [2024-04-24 19:36:28.287510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.878 [2024-04-24 19:36:28.339029] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.878
[2024-04-24 19:36:28.339099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:02.879 [2024-04-24 19:36:28.339114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.546 ms 00:22:02.879 [2024-04-24 19:36:28.339123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.879 [2024-04-24 19:36:28.366374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.879 [2024-04-24 19:36:28.366454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:02.879 [2024-04-24 19:36:28.366473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.226 ms 00:22:02.879 [2024-04-24 19:36:28.366483] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.879 [2024-04-24 19:36:28.366683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.879 [2024-04-24 19:36:28.366710] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:02.879 [2024-04-24 19:36:28.366720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:22:02.879 [2024-04-24 19:36:28.366729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.879 [2024-04-24 19:36:28.414911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.879 [2024-04-24 19:36:28.414973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:02.879 [2024-04-24 19:36:28.414988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.249 ms 00:22:02.879 [2024-04-24 19:36:28.414996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.879 [2024-04-24 19:36:28.463380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.879 [2024-04-24 19:36:28.463448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:02.879 [2024-04-24 19:36:28.463462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.391 ms 00:22:02.879 [2024-04-24 19:36:28.463471] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.879 [2024-04-24 19:36:28.508260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.879 [2024-04-24 19:36:28.508330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:02.879 [2024-04-24 19:36:28.508345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.793 ms 00:22:02.879 [2024-04-24 19:36:28.508353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.139 [2024-04-24 19:36:28.554753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.139 [2024-04-24 19:36:28.554824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:03.139 [2024-04-24 19:36:28.554839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.351 ms 00:22:03.139 [2024-04-24 19:36:28.554846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.139 [2024-04-24 19:36:28.554932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:03.139 [2024-04-24 19:36:28.554951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:03.139 [2024-04-24 19:36:28.554961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:03.139 [2024-04-24 19:36:28.554969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
00:22:03.139 [2024-04-24 19:36:28.554932] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:03.139 ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 through Band 100: 0 / 261120 wr_cnt: 0 state: free
00:22:03.140 [2024-04-24 19:36:28.555905] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:03.140 [2024-04-24 19:36:28.555914] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 23b7d3ce-0cda-4f08-8b35-d9af4c3def74
00:22:03.140
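The statistics block that follows reports total writes 960 against user writes 0, so the write amplification factor is printed as 'inf': every write so far was FTL metadata, none came from the user. A short sketch of why, with assumed field names (total_writes/user_writes):

    #include <math.h>
    #include <stdio.h>

    /* WAF = media writes / user writes; with zero user writes the
     * ratio is reported as infinity, as in the dump below. */
    static double waf(double total_writes, double user_writes)
    {
            return user_writes == 0.0 ? INFINITY
                                      : total_writes / user_writes;
    }

    int main(void)
    {
            printf("WAF: %g\n", waf(960.0, 0.0)); /* prints: WAF: inf */
            return 0;
    }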
[2024-04-24 19:36:28.555923] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:03.140 [2024-04-24 19:36:28.555931] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:03.140 [2024-04-24 19:36:28.555949] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:03.140 [2024-04-24 19:36:28.555974] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:03.140 [2024-04-24 19:36:28.555991] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:03.140 [2024-04-24 19:36:28.556000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:03.140 [2024-04-24 19:36:28.556008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:03.140 [2024-04-24 19:36:28.556015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:03.140 [2024-04-24 19:36:28.556022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:03.140 [2024-04-24 19:36:28.556031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.140 [2024-04-24 19:36:28.556040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:03.140 [2024-04-24 19:36:28.556050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms 00:22:03.140 [2024-04-24 19:36:28.556058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.140 [2024-04-24 19:36:28.577609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.140 [2024-04-24 19:36:28.577714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:03.140 [2024-04-24 19:36:28.577728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.529 ms 00:22:03.140 [2024-04-24 19:36:28.577737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.140 [2024-04-24 19:36:28.578020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.140 [2024-04-24 19:36:28.578029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:03.140 [2024-04-24 19:36:28.578038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:22:03.140 [2024-04-24 19:36:28.578046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.140 [2024-04-24 19:36:28.638084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.140 [2024-04-24 19:36:28.638137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:03.140 [2024-04-24 19:36:28.638149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.140 [2024-04-24 19:36:28.638157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.140 [2024-04-24 19:36:28.638232] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.140 [2024-04-24 19:36:28.638240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:03.140 [2024-04-24 19:36:28.638248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.140 [2024-04-24 19:36:28.638255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.140 [2024-04-24 19:36:28.638323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.140 [2024-04-24 19:36:28.638338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:03.140 [2024-04-24 19:36:28.638345] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.140 [2024-04-24 19:36:28.638353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.140 [2024-04-24 19:36:28.638369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.140 [2024-04-24 19:36:28.638377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:03.140 [2024-04-24 19:36:28.638384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.140 [2024-04-24 19:36:28.638391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.140 [2024-04-24 19:36:28.768847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.140 [2024-04-24 19:36:28.768905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:03.140 [2024-04-24 19:36:28.768920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.140 [2024-04-24 19:36:28.768928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.399 [2024-04-24 19:36:28.818725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.399 [2024-04-24 19:36:28.818786] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:03.399 [2024-04-24 19:36:28.818798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.399 [2024-04-24 19:36:28.818807] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.399 [2024-04-24 19:36:28.818871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.399 [2024-04-24 19:36:28.818879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:03.399 [2024-04-24 19:36:28.818901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.399 [2024-04-24 19:36:28.818908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.399 [2024-04-24 19:36:28.818937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.399 [2024-04-24 19:36:28.818948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:03.399 [2024-04-24 19:36:28.818956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.399 [2024-04-24 19:36:28.818962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.399 [2024-04-24 19:36:28.819061] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.399 [2024-04-24 19:36:28.819074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:03.399 [2024-04-24 19:36:28.819082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.399 [2024-04-24 19:36:28.819093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.399 [2024-04-24 19:36:28.819132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.399 [2024-04-24 19:36:28.819145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:03.399 [2024-04-24 19:36:28.819153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.399 [2024-04-24 19:36:28.819160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.399 [2024-04-24 19:36:28.819194] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.399 [2024-04-24 19:36:28.819203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:22:03.399 [2024-04-24 19:36:28.819210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.399 [2024-04-24 19:36:28.819247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.399 [2024-04-24 19:36:28.819293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:03.399 [2024-04-24 19:36:28.819303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:03.399 [2024-04-24 19:36:28.819312] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:03.399 [2024-04-24 19:36:28.819321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.399 [2024-04-24 19:36:28.819440] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.490 ms, result 0 00:22:04.775 00:22:04.775 00:22:04.775 19:36:30 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:06.685 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:06.685 19:36:32 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:06.685 [2024-04-24 19:36:32.078357] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:22:06.685 [2024-04-24 19:36:32.078482] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81022 ] 00:22:06.685 [2024-04-24 19:36:32.240057] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.944 [2024-04-24 19:36:32.481289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.514 [2024-04-24 19:36:32.922633] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:07.514 [2024-04-24 19:36:32.922721] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:07.514 [2024-04-24 19:36:33.076695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.514 [2024-04-24 19:36:33.076753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:07.514 [2024-04-24 19:36:33.076766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:07.514 [2024-04-24 19:36:33.076774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.514 [2024-04-24 19:36:33.076829] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.514 [2024-04-24 19:36:33.076841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:07.514 [2024-04-24 19:36:33.076849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:07.514 [2024-04-24 19:36:33.076856] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.514 [2024-04-24 19:36:33.076875] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:07.514 [2024-04-24 19:36:33.078178] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:07.514 [2024-04-24 19:36:33.078213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.514 [2024-04-24 19:36:33.078222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:22:07.514 [2024-04-24 19:36:33.078231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:22:07.514 [2024-04-24 19:36:33.078239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.514 [2024-04-24 19:36:33.079679] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:07.514 [2024-04-24 19:36:33.100672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.514 [2024-04-24 19:36:33.100708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:07.514 [2024-04-24 19:36:33.100724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.034 ms 00:22:07.514 [2024-04-24 19:36:33.100732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.514 [2024-04-24 19:36:33.100786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.514 [2024-04-24 19:36:33.100795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:07.514 [2024-04-24 19:36:33.100803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:07.514 [2024-04-24 19:36:33.100811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.514 [2024-04-24 19:36:33.107597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.514 [2024-04-24 19:36:33.107626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:07.514 [2024-04-24 19:36:33.107642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.744 ms 00:22:07.514 [2024-04-24 19:36:33.107649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.514 [2024-04-24 19:36:33.107736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.514 [2024-04-24 19:36:33.107750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:07.514 [2024-04-24 19:36:33.107759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:07.514 [2024-04-24 19:36:33.107767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.514 [2024-04-24 19:36:33.107807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.514 [2024-04-24 19:36:33.107818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:07.514 [2024-04-24 19:36:33.107825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:07.514 [2024-04-24 19:36:33.107832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.514 [2024-04-24 19:36:33.107854] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:07.514 [2024-04-24 19:36:33.114127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.514 [2024-04-24 19:36:33.114155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:07.514 [2024-04-24 19:36:33.114164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.292 ms 00:22:07.514 [2024-04-24 19:36:33.114171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.514 [2024-04-24 19:36:33.114200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.514 [2024-04-24 19:36:33.114208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:07.514 [2024-04-24 19:36:33.114216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.011 ms 00:22:07.514 [2024-04-24 19:36:33.114223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.514 [2024-04-24 19:36:33.114283] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:07.514 [2024-04-24 19:36:33.114309] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:22:07.514 [2024-04-24 19:36:33.114343] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:07.514 [2024-04-24 19:36:33.114358] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:22:07.514 [2024-04-24 19:36:33.114431] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:07.514 [2024-04-24 19:36:33.114453] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:07.514 [2024-04-24 19:36:33.114464] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:07.514 [2024-04-24 19:36:33.114475] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:07.514 [2024-04-24 19:36:33.114485] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:07.514 [2024-04-24 19:36:33.114497] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:07.514 [2024-04-24 19:36:33.114505] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:07.514 [2024-04-24 19:36:33.114513] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:07.514 [2024-04-24 19:36:33.114520] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:07.514 [2024-04-24 19:36:33.114528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.514 [2024-04-24 19:36:33.114536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:07.514 [2024-04-24 19:36:33.114544] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:22:07.514 [2024-04-24 19:36:33.114551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.514 [2024-04-24 19:36:33.114611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.514 [2024-04-24 19:36:33.114620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:07.514 [2024-04-24 19:36:33.114641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:22:07.514 [2024-04-24 19:36:33.114650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.514 [2024-04-24 19:36:33.114723] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:07.514 [2024-04-24 19:36:33.114737] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:07.514 [2024-04-24 19:36:33.114746] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:07.514 [2024-04-24 19:36:33.114754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.514 [2024-04-24 19:36:33.114763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:07.514 [2024-04-24 19:36:33.114770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:07.515 [2024-04-24 
19:36:33.114778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:07.515 [2024-04-24 19:36:33.114786] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:07.515 [2024-04-24 19:36:33.114795] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:07.515 [2024-04-24 19:36:33.114802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:07.515 [2024-04-24 19:36:33.114809] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:07.515 [2024-04-24 19:36:33.114816] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:07.515 [2024-04-24 19:36:33.114836] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:07.515 [2024-04-24 19:36:33.114844] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:07.515 [2024-04-24 19:36:33.114851] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:07.515 [2024-04-24 19:36:33.114858] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.515 [2024-04-24 19:36:33.114866] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:07.515 [2024-04-24 19:36:33.114873] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:07.515 [2024-04-24 19:36:33.114880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.515 [2024-04-24 19:36:33.114887] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:07.515 [2024-04-24 19:36:33.114894] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:07.515 [2024-04-24 19:36:33.114902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:07.515 [2024-04-24 19:36:33.114910] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:07.515 [2024-04-24 19:36:33.114917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:07.515 [2024-04-24 19:36:33.114924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:07.515 [2024-04-24 19:36:33.114931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:07.515 [2024-04-24 19:36:33.114938] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:07.515 [2024-04-24 19:36:33.114945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:07.515 [2024-04-24 19:36:33.114953] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:07.515 [2024-04-24 19:36:33.114960] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:07.515 [2024-04-24 19:36:33.114966] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:07.515 [2024-04-24 19:36:33.114973] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:07.515 [2024-04-24 19:36:33.114980] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:07.515 [2024-04-24 19:36:33.114987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:07.515 [2024-04-24 19:36:33.114993] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:07.515 [2024-04-24 19:36:33.115000] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:07.515 [2024-04-24 19:36:33.115007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:07.515 [2024-04-24 19:36:33.115014] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:22:07.515 [2024-04-24 19:36:33.115021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:07.515 [2024-04-24 19:36:33.115028] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:07.515 [2024-04-24 19:36:33.115036] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:07.515 [2024-04-24 19:36:33.115044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:07.515 [2024-04-24 19:36:33.115055] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:07.515 [2024-04-24 19:36:33.115068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.515 [2024-04-24 19:36:33.115076] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:07.515 [2024-04-24 19:36:33.115083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:07.515 [2024-04-24 19:36:33.115090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:07.515 [2024-04-24 19:36:33.115098] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:07.515 [2024-04-24 19:36:33.115104] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:07.515 [2024-04-24 19:36:33.115112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:07.515 [2024-04-24 19:36:33.115121] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:07.515 [2024-04-24 19:36:33.115132] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:07.515 [2024-04-24 19:36:33.115141] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:07.515 [2024-04-24 19:36:33.115149] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:07.515 [2024-04-24 19:36:33.115157] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:07.515 [2024-04-24 19:36:33.115165] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:07.515 [2024-04-24 19:36:33.115172] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:07.515 [2024-04-24 19:36:33.115180] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:07.515 [2024-04-24 19:36:33.115188] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:07.515 [2024-04-24 19:36:33.115196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:07.515 [2024-04-24 19:36:33.115204] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:07.515 [2024-04-24 19:36:33.115211] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:07.515 [2024-04-24 19:36:33.115226] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:07.515 [2024-04-24 19:36:33.115234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:07.515 [2024-04-24 19:36:33.115243] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:07.515 [2024-04-24 19:36:33.115277] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:07.515 [2024-04-24 19:36:33.115292] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:07.515 [2024-04-24 19:36:33.115301] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:07.515 [2024-04-24 19:36:33.115310] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:07.515 [2024-04-24 19:36:33.115319] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:07.515 [2024-04-24 19:36:33.115328] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:07.515 [2024-04-24 19:36:33.115337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.515 [2024-04-24 19:36:33.115345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:07.515 [2024-04-24 19:36:33.115354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:22:07.515 [2024-04-24 19:36:33.115362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.515 [2024-04-24 19:36:33.141233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.515 [2024-04-24 19:36:33.141266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:07.515 [2024-04-24 19:36:33.141278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.875 ms 00:22:07.515 [2024-04-24 19:36:33.141286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.515 [2024-04-24 19:36:33.141363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.515 [2024-04-24 19:36:33.141376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:07.515 [2024-04-24 19:36:33.141383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:07.515 [2024-04-24 19:36:33.141390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.775 [2024-04-24 19:36:33.208830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.775 [2024-04-24 19:36:33.208875] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:07.775 [2024-04-24 19:36:33.208887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.524 ms 00:22:07.775 [2024-04-24 19:36:33.208898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.775 [2024-04-24 19:36:33.208949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.775 [2024-04-24 19:36:33.208959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:07.775 [2024-04-24 19:36:33.208968] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:07.775 [2024-04-24 19:36:33.208975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.775 [2024-04-24 19:36:33.209429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.775 [2024-04-24 19:36:33.209449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:07.775 [2024-04-24 19:36:33.209458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:22:07.775 [2024-04-24 19:36:33.209466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.775 [2024-04-24 19:36:33.209580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.775 [2024-04-24 19:36:33.209599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:07.775 [2024-04-24 19:36:33.209608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:07.775 [2024-04-24 19:36:33.209616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.775 [2024-04-24 19:36:33.232694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.775 [2024-04-24 19:36:33.232737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:07.775 [2024-04-24 19:36:33.232750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.088 ms 00:22:07.775 [2024-04-24 19:36:33.232758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.775 [2024-04-24 19:36:33.252335] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:07.775 [2024-04-24 19:36:33.252370] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:07.775 [2024-04-24 19:36:33.252381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.775 [2024-04-24 19:36:33.252388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:07.775 [2024-04-24 19:36:33.252397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.542 ms 00:22:07.775 [2024-04-24 19:36:33.252405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.775 [2024-04-24 19:36:33.283039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.775 [2024-04-24 19:36:33.283074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:07.775 [2024-04-24 19:36:33.283085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.655 ms 00:22:07.775 [2024-04-24 19:36:33.283093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.775 [2024-04-24 19:36:33.302539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.775 [2024-04-24 19:36:33.302570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:07.775 [2024-04-24 19:36:33.302580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.428 ms 00:22:07.775 [2024-04-24 19:36:33.302688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.775 [2024-04-24 19:36:33.321179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.775 [2024-04-24 19:36:33.321208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:07.775 [2024-04-24 19:36:33.321217] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.492 ms 00:22:07.775 [2024-04-24 
19:36:33.321224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.775 [2024-04-24 19:36:33.321764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.775 [2024-04-24 19:36:33.321785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:07.775 [2024-04-24 19:36:33.321794] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:22:07.775 [2024-04-24 19:36:33.321802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.775 [2024-04-24 19:36:33.412605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.776 [2024-04-24 19:36:33.412667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:07.776 [2024-04-24 19:36:33.412681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.959 ms 00:22:07.776 [2024-04-24 19:36:33.412689] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.776 [2024-04-24 19:36:33.425257] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:07.776 [2024-04-24 19:36:33.428408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.776 [2024-04-24 19:36:33.428436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:07.776 [2024-04-24 19:36:33.428447] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.689 ms 00:22:07.776 [2024-04-24 19:36:33.428454] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.776 [2024-04-24 19:36:33.428538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.776 [2024-04-24 19:36:33.428554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:07.776 [2024-04-24 19:36:33.428563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:07.776 [2024-04-24 19:36:33.428570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.776 [2024-04-24 19:36:33.428630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.776 [2024-04-24 19:36:33.428651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:07.776 [2024-04-24 19:36:33.428659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:07.776 [2024-04-24 19:36:33.428667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.776 [2024-04-24 19:36:33.430396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.776 [2024-04-24 19:36:33.430422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:07.776 [2024-04-24 19:36:33.430433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.717 ms 00:22:07.776 [2024-04-24 19:36:33.430441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.776 [2024-04-24 19:36:33.430467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.776 [2024-04-24 19:36:33.430475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:07.776 [2024-04-24 19:36:33.430483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:07.776 [2024-04-24 19:36:33.430490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.776 [2024-04-24 19:36:33.430520] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:07.776 [2024-04-24 19:36:33.430546] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:07.776 [2024-04-24 19:36:33.430554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:22:07.776 [2024-04-24 19:36:33.430563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:22:07.776 [2024-04-24 19:36:33.430574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.035 [2024-04-24 19:36:33.471015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.035 [2024-04-24 19:36:33.471056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:22:08.035 [2024-04-24 19:36:33.471068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.500 ms
00:22:08.035 [2024-04-24 19:36:33.471076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.035 [2024-04-24 19:36:33.471151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:08.035 [2024-04-24 19:36:33.471166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:08.035 [2024-04-24 19:36:33.471174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:22:08.035 [2024-04-24 19:36:33.471182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:08.035 [2024-04-24 19:36:33.472364] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.949 ms, result 0
00:22:41.034  Copying: 1024/1024 [MB] (average 30 MBps)[2024-04-24 19:37:06.677766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.034 [2024-04-24 19:37:06.677840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:41.034 [2024-04-24 19:37:06.677855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:22:41.034 [2024-04-24 19:37:06.677863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:41.034 [2024-04-24 19:37:06.680411] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:41.034 [2024-04-24 19:37:06.684775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.034 [2024-04-24 19:37:06.684809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:22:41.034 [2024-04-24 19:37:06.684820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]
duration: 4.327 ms 00:22:41.034 [2024-04-24 19:37:06.684827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.034 [2024-04-24 19:37:06.695800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.034 [2024-04-24 19:37:06.695835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:41.034 [2024-04-24 19:37:06.695847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.040 ms 00:22:41.034 [2024-04-24 19:37:06.695854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.293 [2024-04-24 19:37:06.718796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.293 [2024-04-24 19:37:06.718849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:41.293 [2024-04-24 19:37:06.718873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.951 ms 00:22:41.293 [2024-04-24 19:37:06.718881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.293 [2024-04-24 19:37:06.724192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.293 [2024-04-24 19:37:06.724225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:41.293 [2024-04-24 19:37:06.724234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.286 ms 00:22:41.293 [2024-04-24 19:37:06.724241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.293 [2024-04-24 19:37:06.762680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.293 [2024-04-24 19:37:06.762743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:41.293 [2024-04-24 19:37:06.762756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.476 ms 00:22:41.293 [2024-04-24 19:37:06.762764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.293 [2024-04-24 19:37:06.787155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.293 [2024-04-24 19:37:06.787197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:41.293 [2024-04-24 19:37:06.787209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.366 ms 00:22:41.293 [2024-04-24 19:37:06.787244] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.293 [2024-04-24 19:37:06.897806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.293 [2024-04-24 19:37:06.897882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:41.293 [2024-04-24 19:37:06.897896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 110.707 ms 00:22:41.293 [2024-04-24 19:37:06.897904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.293 [2024-04-24 19:37:06.937663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.293 [2024-04-24 19:37:06.937713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:41.293 [2024-04-24 19:37:06.937726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.815 ms 00:22:41.293 [2024-04-24 19:37:06.937733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.552 [2024-04-24 19:37:06.977089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.552 [2024-04-24 19:37:06.977142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:41.552 [2024-04-24 
19:37:06.977156] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.389 ms 00:22:41.552 [2024-04-24 19:37:06.977178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.552 [2024-04-24 19:37:07.018323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.552 [2024-04-24 19:37:07.018376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:41.552 [2024-04-24 19:37:07.018389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.176 ms 00:22:41.552 [2024-04-24 19:37:07.018396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.552 [2024-04-24 19:37:07.058429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.552 [2024-04-24 19:37:07.058482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:41.553 [2024-04-24 19:37:07.058512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.015 ms 00:22:41.553 [2024-04-24 19:37:07.058519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.553 [2024-04-24 19:37:07.058571] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:41.553 [2024-04-24 19:37:07.058596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 114176 / 261120 wr_cnt: 1 state: open 00:22:41.553 [2024-04-24 19:37:07.058606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:41.553 [2024-04-24 19:37:07.058741] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
[Bands 18-90 condensed: ftl_dev_dump_bands printed an identical "0 / 261120 wr_cnt: 0 state: free" line for each]
00:22:41.554 [2024-04-24 19:37:07.059341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91:
0 / 261120 wr_cnt: 0 state: free 00:22:41.554 [2024-04-24 19:37:07.059350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:41.554 [2024-04-24 19:37:07.059358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:41.554 [2024-04-24 19:37:07.059367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:41.554 [2024-04-24 19:37:07.059376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:41.554 [2024-04-24 19:37:07.059384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:41.554 [2024-04-24 19:37:07.059393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:41.554 [2024-04-24 19:37:07.059401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:41.554 [2024-04-24 19:37:07.059410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:41.554 [2024-04-24 19:37:07.059418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:41.554 [2024-04-24 19:37:07.059434] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:41.554 [2024-04-24 19:37:07.059442] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 23b7d3ce-0cda-4f08-8b35-d9af4c3def74 00:22:41.554 [2024-04-24 19:37:07.059451] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 114176 00:22:41.554 [2024-04-24 19:37:07.059458] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 115136 00:22:41.554 [2024-04-24 19:37:07.059466] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 114176 00:22:41.554 [2024-04-24 19:37:07.059489] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0084 00:22:41.554 [2024-04-24 19:37:07.059497] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:41.554 [2024-04-24 19:37:07.059506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:41.554 [2024-04-24 19:37:07.059516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:41.554 [2024-04-24 19:37:07.059523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:41.554 [2024-04-24 19:37:07.059531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:41.554 [2024-04-24 19:37:07.059539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.554 [2024-04-24 19:37:07.059548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:41.554 [2024-04-24 19:37:07.059561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:22:41.554 [2024-04-24 19:37:07.059570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.554 [2024-04-24 19:37:07.079360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.554 [2024-04-24 19:37:07.079401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:41.554 [2024-04-24 19:37:07.079412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.788 ms 00:22:41.554 [2024-04-24 19:37:07.079420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.554 [2024-04-24 19:37:07.079701] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.554 [2024-04-24 19:37:07.079713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:41.554 [2024-04-24 19:37:07.079722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:22:41.554 [2024-04-24 19:37:07.079730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.554 [2024-04-24 19:37:07.133241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.554 [2024-04-24 19:37:07.133289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:41.554 [2024-04-24 19:37:07.133300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.554 [2024-04-24 19:37:07.133323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.554 [2024-04-24 19:37:07.133386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.554 [2024-04-24 19:37:07.133395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:41.554 [2024-04-24 19:37:07.133402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.554 [2024-04-24 19:37:07.133409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.554 [2024-04-24 19:37:07.133470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.554 [2024-04-24 19:37:07.133482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:41.554 [2024-04-24 19:37:07.133490] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.554 [2024-04-24 19:37:07.133497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.554 [2024-04-24 19:37:07.133512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.554 [2024-04-24 19:37:07.133535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:41.554 [2024-04-24 19:37:07.133543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.554 [2024-04-24 19:37:07.133549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.813 [2024-04-24 19:37:07.253956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.813 [2024-04-24 19:37:07.254017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:41.813 [2024-04-24 19:37:07.254029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.813 [2024-04-24 19:37:07.254038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.813 [2024-04-24 19:37:07.300498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.813 [2024-04-24 19:37:07.300551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:41.813 [2024-04-24 19:37:07.300563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.813 [2024-04-24 19:37:07.300571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.813 [2024-04-24 19:37:07.300631] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.813 [2024-04-24 19:37:07.300653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:41.813 [2024-04-24 19:37:07.300661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.813 [2024-04-24 19:37:07.300668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:41.813 [2024-04-24 19:37:07.300699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.813 [2024-04-24 19:37:07.300708] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:41.813 [2024-04-24 19:37:07.300722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.813 [2024-04-24 19:37:07.300729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.813 [2024-04-24 19:37:07.300832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.813 [2024-04-24 19:37:07.300845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:41.813 [2024-04-24 19:37:07.300852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.813 [2024-04-24 19:37:07.300859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.813 [2024-04-24 19:37:07.300897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.813 [2024-04-24 19:37:07.300907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:41.813 [2024-04-24 19:37:07.300915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.813 [2024-04-24 19:37:07.300926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.813 [2024-04-24 19:37:07.300960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.813 [2024-04-24 19:37:07.300969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:41.813 [2024-04-24 19:37:07.300977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.813 [2024-04-24 19:37:07.300984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.813 [2024-04-24 19:37:07.301027] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.813 [2024-04-24 19:37:07.301036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:41.813 [2024-04-24 19:37:07.301046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.813 [2024-04-24 19:37:07.301053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.813 [2024-04-24 19:37:07.301163] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 626.861 ms, result 0 00:22:43.717 00:22:43.717 00:22:43.717 19:37:09 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:43.717 [2024-04-24 19:37:09.189110] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:22:43.717 [2024-04-24 19:37:09.189229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81396 ] 00:22:43.717 [2024-04-24 19:37:09.349915] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.976 [2024-04-24 19:37:09.595241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.548 [2024-04-24 19:37:10.018702] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:44.548 [2024-04-24 19:37:10.018766] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:44.548 [2024-04-24 19:37:10.167960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.548 [2024-04-24 19:37:10.168021] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:44.548 [2024-04-24 19:37:10.168034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:44.548 [2024-04-24 19:37:10.168043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.548 [2024-04-24 19:37:10.168107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.548 [2024-04-24 19:37:10.168118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:44.548 [2024-04-24 19:37:10.168126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:22:44.548 [2024-04-24 19:37:10.168133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.548 [2024-04-24 19:37:10.168152] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:44.548 [2024-04-24 19:37:10.169353] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:44.548 [2024-04-24 19:37:10.169388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.548 [2024-04-24 19:37:10.169397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:44.548 [2024-04-24 19:37:10.169405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.242 ms 00:22:44.548 [2024-04-24 19:37:10.169413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.548 [2024-04-24 19:37:10.170833] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:44.548 [2024-04-24 19:37:10.190911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.548 [2024-04-24 19:37:10.190969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:44.548 [2024-04-24 19:37:10.190989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.117 ms 00:22:44.548 [2024-04-24 19:37:10.190997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.548 [2024-04-24 19:37:10.191059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.548 [2024-04-24 19:37:10.191068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:44.548 [2024-04-24 19:37:10.191076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:44.548 [2024-04-24 19:37:10.191083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.548 [2024-04-24 19:37:10.198070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.548 [2024-04-24 
19:37:10.198103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:44.548 [2024-04-24 19:37:10.198113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.939 ms 00:22:44.548 [2024-04-24 19:37:10.198120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.548 [2024-04-24 19:37:10.198212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.548 [2024-04-24 19:37:10.198226] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:44.548 [2024-04-24 19:37:10.198234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:44.548 [2024-04-24 19:37:10.198241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.548 [2024-04-24 19:37:10.198284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.548 [2024-04-24 19:37:10.198296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:44.548 [2024-04-24 19:37:10.198303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:44.548 [2024-04-24 19:37:10.198310] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.548 [2024-04-24 19:37:10.198335] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:44.548 [2024-04-24 19:37:10.204065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.548 [2024-04-24 19:37:10.204096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:44.548 [2024-04-24 19:37:10.204106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.749 ms 00:22:44.548 [2024-04-24 19:37:10.204113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.548 [2024-04-24 19:37:10.204140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.548 [2024-04-24 19:37:10.204148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:44.548 [2024-04-24 19:37:10.204155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:44.548 [2024-04-24 19:37:10.204163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.548 [2024-04-24 19:37:10.204204] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:44.548 [2024-04-24 19:37:10.204229] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:22:44.548 [2024-04-24 19:37:10.204259] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:44.548 [2024-04-24 19:37:10.204273] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:22:44.548 [2024-04-24 19:37:10.204337] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:44.548 [2024-04-24 19:37:10.204347] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:44.548 [2024-04-24 19:37:10.204357] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:44.548 [2024-04-24 19:37:10.204367] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:44.548 [2024-04-24 19:37:10.204376] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:44.548 [2024-04-24 19:37:10.204387] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:44.548 [2024-04-24 19:37:10.204395] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:44.548 [2024-04-24 19:37:10.204402] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:44.548 [2024-04-24 19:37:10.204408] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:44.548 [2024-04-24 19:37:10.204415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.548 [2024-04-24 19:37:10.204422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:44.548 [2024-04-24 19:37:10.204430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:22:44.548 [2024-04-24 19:37:10.204436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.548 [2024-04-24 19:37:10.204489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.548 [2024-04-24 19:37:10.204498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:44.548 [2024-04-24 19:37:10.204508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:44.548 [2024-04-24 19:37:10.204515] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.548 [2024-04-24 19:37:10.204579] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:44.548 [2024-04-24 19:37:10.204607] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:44.548 [2024-04-24 19:37:10.204615] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:44.549 [2024-04-24 19:37:10.204622] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.549 [2024-04-24 19:37:10.204629] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:44.549 [2024-04-24 19:37:10.204646] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:44.549 [2024-04-24 19:37:10.204653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:44.549 [2024-04-24 19:37:10.204676] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:44.549 [2024-04-24 19:37:10.204683] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:44.549 [2024-04-24 19:37:10.204690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:44.549 [2024-04-24 19:37:10.204696] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:44.549 [2024-04-24 19:37:10.204703] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:44.549 [2024-04-24 19:37:10.204722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:44.549 [2024-04-24 19:37:10.204728] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:44.549 [2024-04-24 19:37:10.204735] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:44.549 [2024-04-24 19:37:10.204741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.549 [2024-04-24 19:37:10.204748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:44.549 [2024-04-24 19:37:10.204755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:44.549 [2024-04-24 19:37:10.204761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:22:44.549 [2024-04-24 19:37:10.204767] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:44.549 [2024-04-24 19:37:10.204774] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:44.549 [2024-04-24 19:37:10.204781] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:44.549 [2024-04-24 19:37:10.204787] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:44.549 [2024-04-24 19:37:10.204793] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:44.549 [2024-04-24 19:37:10.204800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:44.549 [2024-04-24 19:37:10.204806] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:44.549 [2024-04-24 19:37:10.204812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:44.549 [2024-04-24 19:37:10.204818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:44.549 [2024-04-24 19:37:10.204824] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:44.549 [2024-04-24 19:37:10.204831] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:44.549 [2024-04-24 19:37:10.204837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:44.549 [2024-04-24 19:37:10.204843] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:44.549 [2024-04-24 19:37:10.204849] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:44.549 [2024-04-24 19:37:10.204855] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:44.549 [2024-04-24 19:37:10.204861] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:44.549 [2024-04-24 19:37:10.204868] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:44.549 [2024-04-24 19:37:10.204874] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:44.549 [2024-04-24 19:37:10.204880] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:44.549 [2024-04-24 19:37:10.204886] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:44.549 [2024-04-24 19:37:10.204892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:44.549 [2024-04-24 19:37:10.204898] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:44.549 [2024-04-24 19:37:10.204906] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:44.549 [2024-04-24 19:37:10.204917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:44.549 [2024-04-24 19:37:10.204928] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:44.549 [2024-04-24 19:37:10.204935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:44.549 [2024-04-24 19:37:10.204942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:44.549 [2024-04-24 19:37:10.204948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:44.549 [2024-04-24 19:37:10.204955] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:44.549 [2024-04-24 19:37:10.204961] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:44.549 [2024-04-24 19:37:10.204967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:44.549 [2024-04-24 19:37:10.204975] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:44.549 [2024-04-24 19:37:10.204984] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:44.549 [2024-04-24 19:37:10.204992] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:44.549 [2024-04-24 19:37:10.204998] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:44.549 [2024-04-24 19:37:10.205005] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:44.549 [2024-04-24 19:37:10.205012] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:44.549 [2024-04-24 19:37:10.205020] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:44.549 [2024-04-24 19:37:10.205027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:44.549 [2024-04-24 19:37:10.205034] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:44.549 [2024-04-24 19:37:10.205040] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:44.549 [2024-04-24 19:37:10.205047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:44.549 [2024-04-24 19:37:10.205054] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:44.549 [2024-04-24 19:37:10.205061] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:44.549 [2024-04-24 19:37:10.205068] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:44.549 [2024-04-24 19:37:10.205075] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:44.549 [2024-04-24 19:37:10.205081] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:44.549 [2024-04-24 19:37:10.205089] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:44.549 [2024-04-24 19:37:10.205096] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:44.549 [2024-04-24 19:37:10.205103] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:44.549 [2024-04-24 19:37:10.205110] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:44.549 [2024-04-24 19:37:10.205116] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:22:44.549 [2024-04-24 19:37:10.205124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.549 [2024-04-24 19:37:10.205131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:44.549 [2024-04-24 19:37:10.205139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:22:44.549 [2024-04-24 19:37:10.205146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.808 [2024-04-24 19:37:10.230321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.808 [2024-04-24 19:37:10.230357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:44.808 [2024-04-24 19:37:10.230368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.183 ms 00:22:44.809 [2024-04-24 19:37:10.230376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.809 [2024-04-24 19:37:10.230459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.809 [2024-04-24 19:37:10.230470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:44.809 [2024-04-24 19:37:10.230478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:44.809 [2024-04-24 19:37:10.230485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.809 [2024-04-24 19:37:10.293987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.809 [2024-04-24 19:37:10.294043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:44.809 [2024-04-24 19:37:10.294055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.573 ms 00:22:44.809 [2024-04-24 19:37:10.294066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.809 [2024-04-24 19:37:10.294141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.809 [2024-04-24 19:37:10.294150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:44.809 [2024-04-24 19:37:10.294158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:44.809 [2024-04-24 19:37:10.294166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.809 [2024-04-24 19:37:10.294633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.809 [2024-04-24 19:37:10.294660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:44.809 [2024-04-24 19:37:10.294669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:22:44.809 [2024-04-24 19:37:10.294676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.809 [2024-04-24 19:37:10.294781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.809 [2024-04-24 19:37:10.294799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:44.809 [2024-04-24 19:37:10.294807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:22:44.809 [2024-04-24 19:37:10.294814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.809 [2024-04-24 19:37:10.317664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.809 [2024-04-24 19:37:10.317719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:44.809 [2024-04-24 19:37:10.317732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.869 ms 00:22:44.809 [2024-04-24 
19:37:10.317741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.809 [2024-04-24 19:37:10.338575] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:44.809 [2024-04-24 19:37:10.338631] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:44.809 [2024-04-24 19:37:10.338663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.809 [2024-04-24 19:37:10.338672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:44.809 [2024-04-24 19:37:10.338699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.806 ms 00:22:44.809 [2024-04-24 19:37:10.338706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.809 [2024-04-24 19:37:10.371893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.809 [2024-04-24 19:37:10.371964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:44.809 [2024-04-24 19:37:10.371980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.180 ms 00:22:44.809 [2024-04-24 19:37:10.371989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.809 [2024-04-24 19:37:10.393549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.809 [2024-04-24 19:37:10.393605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:44.809 [2024-04-24 19:37:10.393618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.519 ms 00:22:44.809 [2024-04-24 19:37:10.393626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.809 [2024-04-24 19:37:10.413296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.809 [2024-04-24 19:37:10.413339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:44.809 [2024-04-24 19:37:10.413350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.613 ms 00:22:44.809 [2024-04-24 19:37:10.413357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.809 [2024-04-24 19:37:10.413909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.809 [2024-04-24 19:37:10.413931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:44.809 [2024-04-24 19:37:10.413940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:22:44.809 [2024-04-24 19:37:10.413948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.069 [2024-04-24 19:37:10.506977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.070 [2024-04-24 19:37:10.507039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:45.070 [2024-04-24 19:37:10.507053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.188 ms 00:22:45.070 [2024-04-24 19:37:10.507061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.070 [2024-04-24 19:37:10.519685] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:45.070 [2024-04-24 19:37:10.522728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.070 [2024-04-24 19:37:10.522755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:45.070 [2024-04-24 19:37:10.522766] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.613 ms 00:22:45.070 [2024-04-24 19:37:10.522773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.070 [2024-04-24 19:37:10.522879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.070 [2024-04-24 19:37:10.522894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:45.070 [2024-04-24 19:37:10.522903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:45.070 [2024-04-24 19:37:10.522910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.070 [2024-04-24 19:37:10.524303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.070 [2024-04-24 19:37:10.524344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:45.070 [2024-04-24 19:37:10.524354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.362 ms 00:22:45.070 [2024-04-24 19:37:10.524362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.070 [2024-04-24 19:37:10.525976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.070 [2024-04-24 19:37:10.526001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:45.070 [2024-04-24 19:37:10.526013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.589 ms 00:22:45.070 [2024-04-24 19:37:10.526019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.070 [2024-04-24 19:37:10.526059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.070 [2024-04-24 19:37:10.526066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:45.070 [2024-04-24 19:37:10.526090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:45.070 [2024-04-24 19:37:10.526097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.070 [2024-04-24 19:37:10.526129] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:45.070 [2024-04-24 19:37:10.526138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.070 [2024-04-24 19:37:10.526144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:45.070 [2024-04-24 19:37:10.526151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:45.070 [2024-04-24 19:37:10.526161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.070 [2024-04-24 19:37:10.566256] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.070 [2024-04-24 19:37:10.566300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:45.070 [2024-04-24 19:37:10.566327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.155 ms 00:22:45.070 [2024-04-24 19:37:10.566336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.070 [2024-04-24 19:37:10.566416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.070 [2024-04-24 19:37:10.566430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:45.070 [2024-04-24 19:37:10.566438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:45.070 [2024-04-24 19:37:10.566446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.070 [2024-04-24 19:37:10.571872] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 403.296 ms, result 0 00:23:16.394  Copying: 29/1024 [MB] (29 MBps) Copying: 62/1024 [MB] (33 MBps) Copying: 94/1024 [MB] (31 MBps) Copying: 126/1024 [MB] (31 MBps) Copying: 158/1024 [MB] (32 MBps) Copying: 189/1024 [MB] (31 MBps) Copying: 220/1024 [MB] (30 MBps) Copying: 253/1024 [MB] (32 MBps) Copying: 286/1024 [MB] (33 MBps) Copying: 319/1024 [MB] (33 MBps) Copying: 350/1024 [MB] (30 MBps) Copying: 383/1024 [MB] (33 MBps) Copying: 415/1024 [MB] (32 MBps) Copying: 447/1024 [MB] (32 MBps) Copying: 481/1024 [MB] (33 MBps) Copying: 514/1024 [MB] (33 MBps) Copying: 547/1024 [MB] (32 MBps) Copying: 577/1024 [MB] (30 MBps) Copying: 609/1024 [MB] (31 MBps) Copying: 641/1024 [MB] (31 MBps) Copying: 674/1024 [MB] (33 MBps) Copying: 710/1024 [MB] (35 MBps) Copying: 746/1024 [MB] (35 MBps) Copying: 782/1024 [MB] (36 MBps) Copying: 819/1024 [MB] (37 MBps) Copying: 854/1024 [MB] (34 MBps) Copying: 885/1024 [MB] (31 MBps) Copying: 917/1024 [MB] (32 MBps) Copying: 950/1024 [MB] (32 MBps) Copying: 986/1024 [MB] (35 MBps) Copying: 1021/1024 [MB] (35 MBps) Copying: 1024/1024 [MB] (average 33 MBps)[2024-04-24 19:37:41.936972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.394 [2024-04-24 19:37:41.937081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:16.394 [2024-04-24 19:37:41.937114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:16.394 [2024-04-24 19:37:41.937134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.394 [2024-04-24 19:37:41.937183] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:16.394 [2024-04-24 19:37:41.949418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.394 [2024-04-24 19:37:41.949530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:16.394 [2024-04-24 19:37:41.949585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.217 ms 00:23:16.394 [2024-04-24 19:37:41.949628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.394 [2024-04-24 19:37:41.950481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.394 [2024-04-24 19:37:41.950569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:16.394 [2024-04-24 19:37:41.950601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:23:16.394 [2024-04-24 19:37:41.950627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.394 [2024-04-24 19:37:41.961133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.394 [2024-04-24 19:37:41.961230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:16.394 [2024-04-24 19:37:41.961277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.442 ms 00:23:16.394 [2024-04-24 19:37:41.961295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.394 [2024-04-24 19:37:41.970981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.394 [2024-04-24 19:37:41.971040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:16.394 [2024-04-24 19:37:41.971065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.633 ms 00:23:16.394 [2024-04-24 19:37:41.971084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.394 
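
The average throughput reported by the Copying: lines above can be sanity-checked against the wall clock: 'FTL startup' finished at 19:37:10.57 and the first shutdown step began at 19:37:41.94, so the 1024 [MB] read took roughly 31.4 s:

    # Timestamps read off the surrounding log entries (seconds within 19:37).
    start, end = 10.57, 41.94
    print(1024 / (end - start))   # ~32.6 MB/s, consistent with the logged "average 33 MBps"
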
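Every FTL management step in these logs emits the same four trace_step NOTICE lines (Action or Rollback, name, duration, status), which makes per-step timing easy to aggregate when hunting for slow steps such as the 110.707 ms 'Persist P2L metadata' in the earlier shutdown. A minimal sketch that folds those lines into a duration table; it assumes one log entry per line on stdin, keys on the device tag (ftl0), and knows nothing about SPDK beyond the line shapes visible here:

    import re
    import sys
    from collections import defaultdict

    # "... 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P"
    name_re = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] name: (.+?)\s*$")
    # "... 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.951 ms"
    dur_re = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[(\w+)\] duration: ([\d.]+) ms")

    durations = defaultdict(float)   # step name -> accumulated ms
    pending = {}                     # device tag -> step awaiting its duration line
    for line in sys.stdin:
        m = name_re.search(line)
        if m:
            pending[m.group(1)] = m.group(2)
            continue
        m = dur_re.search(line)
        if m and m.group(1) in pending:
            durations[pending.pop(m.group(1))] += float(m.group(2))

    for step, ms in sorted(durations.items(), key=lambda kv: -kv[1]):
        print(f"{ms:10.3f} ms  {step}")

Fed with, say, "grep trace_step build.log | python3 ftl_step_times.py" (script name and grep are illustrative, not part of this test), it prints the slowest steps first.
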
[2024-04-24 19:37:42.016125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.394 [2024-04-24 19:37:42.016208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:16.394 [2024-04-24 19:37:42.016225] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.031 ms 00:23:16.394 [2024-04-24 19:37:42.016234] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.394 [2024-04-24 19:37:42.042447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.394 [2024-04-24 19:37:42.042506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:16.394 [2024-04-24 19:37:42.042520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.178 ms 00:23:16.394 [2024-04-24 19:37:42.042539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.653 [2024-04-24 19:37:42.125235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.653 [2024-04-24 19:37:42.125336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:16.653 [2024-04-24 19:37:42.125355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.783 ms 00:23:16.653 [2024-04-24 19:37:42.125365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.653 [2024-04-24 19:37:42.171795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.653 [2024-04-24 19:37:42.171856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:16.653 [2024-04-24 19:37:42.171870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.492 ms 00:23:16.653 [2024-04-24 19:37:42.171879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.653 [2024-04-24 19:37:42.217397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.653 [2024-04-24 19:37:42.217466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:16.653 [2024-04-24 19:37:42.217481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.537 ms 00:23:16.653 [2024-04-24 19:37:42.217491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.653 [2024-04-24 19:37:42.262992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.653 [2024-04-24 19:37:42.263068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:16.653 [2024-04-24 19:37:42.263083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.504 ms 00:23:16.653 [2024-04-24 19:37:42.263093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.653 [2024-04-24 19:37:42.304601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.653 [2024-04-24 19:37:42.304703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:16.653 [2024-04-24 19:37:42.304718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.417 ms 00:23:16.653 [2024-04-24 19:37:42.304727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.653 [2024-04-24 19:37:42.304817] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:16.654 [2024-04-24 19:37:42.304846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:23:16.654 [2024-04-24 19:37:42.304856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 
wr_cnt: 0 state: free
[Bands 3-76 condensed: ftl_dev_dump_bands printed an identical "0 / 261120 wr_cnt: 0 state: free" line for each]
00:23:16.654 [2024-04-24 19:37:42.305535] ftl_debug.c:
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:16.654 [2024-04-24 19:37:42.305808] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:16.654 [2024-04-24 19:37:42.305817] ftl_debug.c: 
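
The band dump reads as "Band N: valid_blocks / total_blocks wr_cnt: W state: S"; only Band 1 was ever opened in this run, holding 133632 valid blocks out of 261120, which matches the "total valid LBAs: 133632" figure in the statistics that follow. A quick spot-check of the open band's utilization (a sketch, not part of the test scripts):

    # Band 1 utilization: valid blocks over band capacity
    awk 'BEGIN { printf "%.1f%%\n", 100 * 133632 / 261120 }'   # ~51.2%
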
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 23b7d3ce-0cda-4f08-8b35-d9af4c3def74 00:23:16.654 [2024-04-24 19:37:42.305826] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:23:16.654 [2024-04-24 19:37:42.305834] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 20416 00:23:16.654 [2024-04-24 19:37:42.305842] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 19456 00:23:16.654 [2024-04-24 19:37:42.305852] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0493 00:23:16.654 [2024-04-24 19:37:42.305876] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:16.654 [2024-04-24 19:37:42.305886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:16.654 [2024-04-24 19:37:42.305894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:16.654 [2024-04-24 19:37:42.305901] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:16.654 [2024-04-24 19:37:42.305908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:16.654 [2024-04-24 19:37:42.305918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.654 [2024-04-24 19:37:42.305927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:16.654 [2024-04-24 19:37:42.305941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.104 ms 00:23:16.655 [2024-04-24 19:37:42.305953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.655 [2024-04-24 19:37:42.327399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.655 [2024-04-24 19:37:42.327476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:16.655 [2024-04-24 19:37:42.327500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.403 ms 00:23:16.655 [2024-04-24 19:37:42.327514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.655 [2024-04-24 19:37:42.327872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.655 [2024-04-24 19:37:42.327900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:16.655 [2024-04-24 19:37:42.327913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:23:16.655 [2024-04-24 19:37:42.327923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.913 [2024-04-24 19:37:42.386678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.913 [2024-04-24 19:37:42.386755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:16.913 [2024-04-24 19:37:42.386773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.913 [2024-04-24 19:37:42.386784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.913 [2024-04-24 19:37:42.386895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.913 [2024-04-24 19:37:42.386908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:16.913 [2024-04-24 19:37:42.386920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.913 [2024-04-24 19:37:42.386930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.913 [2024-04-24 19:37:42.387044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.913 [2024-04-24 19:37:42.387069] mngt/ftl_mngt.c: 
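
The WAF line is simply the ratio of the two counters above it: 20416 total writes / 19456 user writes ≈ 1.0493, i.e. about 5% of the physical writes were FTL-internal (metadata, relocation) rather than user data. Recomputing it from a saved log (a sketch; ftl.log is a hypothetical capture of this console output):

    total=$(grep -o 'total writes: [0-9]*' ftl.log | awk '{ print $3 }')
    user=$(grep -o 'user writes: [0-9]*' ftl.log | awk '{ print $3 }')
    awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.4f\n", t / u }'
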
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:16.913 [2024-04-24 19:37:42.387083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.913 [2024-04-24 19:37:42.387096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.913 [2024-04-24 19:37:42.387120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.913 [2024-04-24 19:37:42.387138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:16.913 [2024-04-24 19:37:42.387150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.913 [2024-04-24 19:37:42.387162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.913 [2024-04-24 19:37:42.521579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.913 [2024-04-24 19:37:42.521648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:16.913 [2024-04-24 19:37:42.521663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.913 [2024-04-24 19:37:42.521672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.913 [2024-04-24 19:37:42.574045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.913 [2024-04-24 19:37:42.574105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:16.913 [2024-04-24 19:37:42.574119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.913 [2024-04-24 19:37:42.574128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.913 [2024-04-24 19:37:42.574200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.913 [2024-04-24 19:37:42.574209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:16.913 [2024-04-24 19:37:42.574219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.913 [2024-04-24 19:37:42.574228] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.913 [2024-04-24 19:37:42.574262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.913 [2024-04-24 19:37:42.574271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:16.913 [2024-04-24 19:37:42.574291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.913 [2024-04-24 19:37:42.574299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.913 [2024-04-24 19:37:42.574416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.913 [2024-04-24 19:37:42.574449] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:16.913 [2024-04-24 19:37:42.574458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.914 [2024-04-24 19:37:42.574466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.914 [2024-04-24 19:37:42.574500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.914 [2024-04-24 19:37:42.574510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:16.914 [2024-04-24 19:37:42.574519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.914 [2024-04-24 19:37:42.574531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.914 [2024-04-24 19:37:42.574570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:23:16.914 [2024-04-24 19:37:42.574578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:16.914 [2024-04-24 19:37:42.574587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.914 [2024-04-24 19:37:42.574594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.914 [2024-04-24 19:37:42.574660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.914 [2024-04-24 19:37:42.574671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:16.914 [2024-04-24 19:37:42.574682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.914 [2024-04-24 19:37:42.574690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.914 [2024-04-24 19:37:42.574845] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 639.085 ms, result 0 00:23:18.809 00:23:18.809 00:23:18.809 19:37:44 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:20.709 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:20.709 19:37:46 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:20.709 19:37:46 -- ftl/restore.sh@85 -- # restore_kill 00:23:20.709 19:37:46 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:20.709 19:37:46 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:20.709 19:37:46 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:20.709 19:37:46 -- ftl/restore.sh@32 -- # killprocess 80036 00:23:20.709 19:37:46 -- common/autotest_common.sh@936 -- # '[' -z 80036 ']' 00:23:20.709 19:37:46 -- common/autotest_common.sh@940 -- # kill -0 80036 00:23:20.709 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (80036) - No such process 00:23:20.709 Process with pid 80036 is not found 00:23:20.709 19:37:46 -- common/autotest_common.sh@963 -- # echo 'Process with pid 80036 is not found' 00:23:20.709 19:37:46 -- ftl/restore.sh@33 -- # remove_shm 00:23:20.709 Remove shared memory files 00:23:20.709 19:37:46 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:20.709 19:37:46 -- ftl/common.sh@205 -- # rm -f rm -f 00:23:20.709 19:37:46 -- ftl/common.sh@206 -- # rm -f rm -f 00:23:20.709 19:37:46 -- ftl/common.sh@207 -- # rm -f rm -f 00:23:20.709 19:37:46 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:20.709 19:37:46 -- ftl/common.sh@209 -- # rm -f rm -f 00:23:20.709 00:23:20.709 real 2m45.174s 00:23:20.709 user 2m33.934s 00:23:20.709 sys 0m13.089s 00:23:20.709 19:37:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:20.709 19:37:46 -- common/autotest_common.sh@10 -- # set +x 00:23:20.709 ************************************ 00:23:20.709 END TEST ftl_restore 00:23:20.709 ************************************ 00:23:20.709 19:37:46 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:20.709 19:37:46 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:23:20.709 19:37:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:20.709 19:37:46 -- common/autotest_common.sh@10 -- # set +x 00:23:20.966 ************************************ 00:23:20.966 START TEST ftl_dirty_shutdown 00:23:20.966 ************************************ 00:23:20.966 19:37:46 -- 
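
The "No such process" line in the teardown above is the expected path, not a failure: killprocess probes the pid with kill -0, which sends no signal and only reports whether the process still exists, and the FTL app had already exited by the time restore_kill ran. Condensed, the guard works like this (a sketch of the helper's logic, not the verbatim autotest_common.sh function):

    pid=80036
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid"                # still alive: terminate it
    else
        echo "Process with pid $pid is not found"
    fi
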
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:20.966 * Looking for test storage... 00:23:20.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:20.966 19:37:46 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:20.966 19:37:46 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:20.966 19:37:46 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:20.966 19:37:46 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:20.966 19:37:46 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:20.966 19:37:46 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:20.966 19:37:46 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:20.966 19:37:46 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:20.966 19:37:46 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:20.966 19:37:46 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.966 19:37:46 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.966 19:37:46 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:20.966 19:37:46 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:20.966 19:37:46 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:20.966 19:37:46 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:20.966 19:37:46 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:20.966 19:37:46 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:20.966 19:37:46 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.966 19:37:46 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:20.966 19:37:46 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:20.966 19:37:46 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:20.966 19:37:46 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:20.966 19:37:46 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:20.966 19:37:46 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:20.966 19:37:46 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:20.966 19:37:46 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:20.967 19:37:46 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:20.967 19:37:46 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.967 19:37:46 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:20.967 19:37:46 -- 
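
The xtrace above shows dirty_shutdown.sh consuming its options before reading the positional device addresses: the getopts spec ':u:c:' declares -u (a device UUID, not exercised in this run) and -c (the PCIe address of the NV-cache controller), and this run passed only "-c 0000:00:10.0". A sketch consistent with the trace, not a copy of the script:

    while getopts ':u:c:' opt; do
        case $opt in
            c) nv_cache=$OPTARG ;;   # -c 0000:00:10.0 in this run
            u) uuid=$OPTARG ;;       # unused here
        esac
    done
    shift $(( OPTIND - 1 ))          # expands to "shift 2" in this invocation
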
ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@45 -- # svcpid=81826 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:20.967 19:37:46 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81826 00:23:20.967 19:37:46 -- common/autotest_common.sh@817 -- # '[' -z 81826 ']' 00:23:20.967 19:37:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.967 19:37:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:20.967 19:37:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.967 19:37:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:20.967 19:37:46 -- common/autotest_common.sh@10 -- # set +x 00:23:20.967 [2024-04-24 19:37:46.565000] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:23:20.967 [2024-04-24 19:37:46.565180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81826 ] 00:23:21.224 [2024-04-24 19:37:46.748041] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.481 [2024-04-24 19:37:47.081399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.877 19:37:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:22.877 19:37:48 -- common/autotest_common.sh@850 -- # return 0 00:23:22.877 19:37:48 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:22.877 19:37:48 -- ftl/common.sh@54 -- # local name=nvme0 00:23:22.877 19:37:48 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:22.877 19:37:48 -- ftl/common.sh@56 -- # local size=103424 00:23:22.877 19:37:48 -- ftl/common.sh@59 -- # local base_bdev 00:23:22.877 19:37:48 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:22.877 19:37:48 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:22.877 19:37:48 -- ftl/common.sh@62 -- # local base_size 00:23:22.877 19:37:48 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:22.877 19:37:48 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:23:22.877 19:37:48 -- common/autotest_common.sh@1365 -- # local bdev_info 00:23:22.877 19:37:48 -- common/autotest_common.sh@1366 -- # local bs 00:23:22.877 19:37:48 -- common/autotest_common.sh@1367 -- # local nb 00:23:22.877 19:37:48 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:23.134 19:37:48 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:23:23.134 { 00:23:23.134 "name": "nvme0n1", 00:23:23.134 "aliases": [ 00:23:23.134 "3c98c886-97ab-44f9-847c-8c38f9a69168" 00:23:23.134 ], 00:23:23.134 "product_name": "NVMe disk", 00:23:23.134 "block_size": 4096, 00:23:23.134 
"num_blocks": 1310720, 00:23:23.134 "uuid": "3c98c886-97ab-44f9-847c-8c38f9a69168", 00:23:23.134 "assigned_rate_limits": { 00:23:23.134 "rw_ios_per_sec": 0, 00:23:23.134 "rw_mbytes_per_sec": 0, 00:23:23.134 "r_mbytes_per_sec": 0, 00:23:23.134 "w_mbytes_per_sec": 0 00:23:23.134 }, 00:23:23.134 "claimed": true, 00:23:23.134 "claim_type": "read_many_write_one", 00:23:23.134 "zoned": false, 00:23:23.134 "supported_io_types": { 00:23:23.134 "read": true, 00:23:23.134 "write": true, 00:23:23.134 "unmap": true, 00:23:23.134 "write_zeroes": true, 00:23:23.134 "flush": true, 00:23:23.134 "reset": true, 00:23:23.134 "compare": true, 00:23:23.134 "compare_and_write": false, 00:23:23.134 "abort": true, 00:23:23.134 "nvme_admin": true, 00:23:23.134 "nvme_io": true 00:23:23.134 }, 00:23:23.134 "driver_specific": { 00:23:23.134 "nvme": [ 00:23:23.134 { 00:23:23.134 "pci_address": "0000:00:11.0", 00:23:23.134 "trid": { 00:23:23.134 "trtype": "PCIe", 00:23:23.134 "traddr": "0000:00:11.0" 00:23:23.134 }, 00:23:23.134 "ctrlr_data": { 00:23:23.134 "cntlid": 0, 00:23:23.134 "vendor_id": "0x1b36", 00:23:23.134 "model_number": "QEMU NVMe Ctrl", 00:23:23.134 "serial_number": "12341", 00:23:23.134 "firmware_revision": "8.0.0", 00:23:23.134 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:23.134 "oacs": { 00:23:23.134 "security": 0, 00:23:23.134 "format": 1, 00:23:23.134 "firmware": 0, 00:23:23.134 "ns_manage": 1 00:23:23.134 }, 00:23:23.134 "multi_ctrlr": false, 00:23:23.134 "ana_reporting": false 00:23:23.134 }, 00:23:23.134 "vs": { 00:23:23.134 "nvme_version": "1.4" 00:23:23.134 }, 00:23:23.134 "ns_data": { 00:23:23.134 "id": 1, 00:23:23.134 "can_share": false 00:23:23.134 } 00:23:23.134 } 00:23:23.134 ], 00:23:23.134 "mp_policy": "active_passive" 00:23:23.134 } 00:23:23.134 } 00:23:23.134 ]' 00:23:23.134 19:37:48 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:23:23.134 19:37:48 -- common/autotest_common.sh@1369 -- # bs=4096 00:23:23.134 19:37:48 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:23:23.134 19:37:48 -- common/autotest_common.sh@1370 -- # nb=1310720 00:23:23.134 19:37:48 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:23:23.134 19:37:48 -- common/autotest_common.sh@1374 -- # echo 5120 00:23:23.134 19:37:48 -- ftl/common.sh@63 -- # base_size=5120 00:23:23.134 19:37:48 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:23.134 19:37:48 -- ftl/common.sh@67 -- # clear_lvols 00:23:23.134 19:37:48 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:23.134 19:37:48 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:23.393 19:37:48 -- ftl/common.sh@28 -- # stores=c2224149-2a40-437a-8bdd-29384eb29570 00:23:23.393 19:37:48 -- ftl/common.sh@29 -- # for lvs in $stores 00:23:23.393 19:37:48 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c2224149-2a40-437a-8bdd-29384eb29570 00:23:23.961 19:37:49 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:23.961 19:37:49 -- ftl/common.sh@68 -- # lvs=b7773cbf-4559-4cde-a702-b8e5ae74aadc 00:23:23.961 19:37:49 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b7773cbf-4559-4cde-a702-b8e5ae74aadc 00:23:24.219 19:37:49 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=3fa6c996-944d-45d3-8ad4-64f187af3489 00:23:24.219 19:37:49 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:24.219 19:37:49 -- ftl/dirty_shutdown.sh@52 -- # 
create_nv_cache_bdev nvc0 0000:00:10.0 3fa6c996-944d-45d3-8ad4-64f187af3489 00:23:24.219 19:37:49 -- ftl/common.sh@35 -- # local name=nvc0 00:23:24.219 19:37:49 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:24.219 19:37:49 -- ftl/common.sh@37 -- # local base_bdev=3fa6c996-944d-45d3-8ad4-64f187af3489 00:23:24.219 19:37:49 -- ftl/common.sh@38 -- # local cache_size= 00:23:24.219 19:37:49 -- ftl/common.sh@41 -- # get_bdev_size 3fa6c996-944d-45d3-8ad4-64f187af3489 00:23:24.219 19:37:49 -- common/autotest_common.sh@1364 -- # local bdev_name=3fa6c996-944d-45d3-8ad4-64f187af3489 00:23:24.219 19:37:49 -- common/autotest_common.sh@1365 -- # local bdev_info 00:23:24.219 19:37:49 -- common/autotest_common.sh@1366 -- # local bs 00:23:24.219 19:37:49 -- common/autotest_common.sh@1367 -- # local nb 00:23:24.219 19:37:49 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3fa6c996-944d-45d3-8ad4-64f187af3489 00:23:24.479 19:37:49 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:23:24.479 { 00:23:24.479 "name": "3fa6c996-944d-45d3-8ad4-64f187af3489", 00:23:24.479 "aliases": [ 00:23:24.479 "lvs/nvme0n1p0" 00:23:24.479 ], 00:23:24.479 "product_name": "Logical Volume", 00:23:24.479 "block_size": 4096, 00:23:24.479 "num_blocks": 26476544, 00:23:24.479 "uuid": "3fa6c996-944d-45d3-8ad4-64f187af3489", 00:23:24.479 "assigned_rate_limits": { 00:23:24.479 "rw_ios_per_sec": 0, 00:23:24.479 "rw_mbytes_per_sec": 0, 00:23:24.479 "r_mbytes_per_sec": 0, 00:23:24.479 "w_mbytes_per_sec": 0 00:23:24.479 }, 00:23:24.479 "claimed": false, 00:23:24.479 "zoned": false, 00:23:24.479 "supported_io_types": { 00:23:24.479 "read": true, 00:23:24.479 "write": true, 00:23:24.479 "unmap": true, 00:23:24.479 "write_zeroes": true, 00:23:24.479 "flush": false, 00:23:24.479 "reset": true, 00:23:24.479 "compare": false, 00:23:24.479 "compare_and_write": false, 00:23:24.479 "abort": false, 00:23:24.479 "nvme_admin": false, 00:23:24.479 "nvme_io": false 00:23:24.479 }, 00:23:24.479 "driver_specific": { 00:23:24.479 "lvol": { 00:23:24.479 "lvol_store_uuid": "b7773cbf-4559-4cde-a702-b8e5ae74aadc", 00:23:24.479 "base_bdev": "nvme0n1", 00:23:24.479 "thin_provision": true, 00:23:24.479 "snapshot": false, 00:23:24.479 "clone": false, 00:23:24.479 "esnap_clone": false 00:23:24.479 } 00:23:24.479 } 00:23:24.479 } 00:23:24.479 ]' 00:23:24.479 19:37:49 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:23:24.479 19:37:49 -- common/autotest_common.sh@1369 -- # bs=4096 00:23:24.479 19:37:50 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:23:24.479 19:37:50 -- common/autotest_common.sh@1370 -- # nb=26476544 00:23:24.479 19:37:50 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:23:24.479 19:37:50 -- common/autotest_common.sh@1374 -- # echo 103424 00:23:24.479 19:37:50 -- ftl/common.sh@41 -- # local base_size=5171 00:23:24.479 19:37:50 -- ftl/common.sh@44 -- # local nvc_bdev 00:23:24.479 19:37:50 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:24.738 19:37:50 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:24.738 19:37:50 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:24.738 19:37:50 -- ftl/common.sh@48 -- # get_bdev_size 3fa6c996-944d-45d3-8ad4-64f187af3489 00:23:24.738 19:37:50 -- common/autotest_common.sh@1364 -- # local bdev_name=3fa6c996-944d-45d3-8ad4-64f187af3489 00:23:24.738 19:37:50 -- common/autotest_common.sh@1365 -- # local bdev_info 00:23:24.738 19:37:50 -- 
common/autotest_common.sh@1366 -- # local bs 00:23:24.738 19:37:50 -- common/autotest_common.sh@1367 -- # local nb 00:23:24.738 19:37:50 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3fa6c996-944d-45d3-8ad4-64f187af3489 00:23:24.998 19:37:50 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:23:24.998 { 00:23:24.998 "name": "3fa6c996-944d-45d3-8ad4-64f187af3489", 00:23:24.998 "aliases": [ 00:23:24.998 "lvs/nvme0n1p0" 00:23:24.998 ], 00:23:24.998 "product_name": "Logical Volume", 00:23:24.998 "block_size": 4096, 00:23:24.998 "num_blocks": 26476544, 00:23:24.998 "uuid": "3fa6c996-944d-45d3-8ad4-64f187af3489", 00:23:24.998 "assigned_rate_limits": { 00:23:24.998 "rw_ios_per_sec": 0, 00:23:24.998 "rw_mbytes_per_sec": 0, 00:23:24.998 "r_mbytes_per_sec": 0, 00:23:24.998 "w_mbytes_per_sec": 0 00:23:24.998 }, 00:23:24.998 "claimed": false, 00:23:24.998 "zoned": false, 00:23:24.998 "supported_io_types": { 00:23:24.998 "read": true, 00:23:24.998 "write": true, 00:23:24.999 "unmap": true, 00:23:24.999 "write_zeroes": true, 00:23:24.999 "flush": false, 00:23:24.999 "reset": true, 00:23:24.999 "compare": false, 00:23:24.999 "compare_and_write": false, 00:23:24.999 "abort": false, 00:23:24.999 "nvme_admin": false, 00:23:24.999 "nvme_io": false 00:23:24.999 }, 00:23:24.999 "driver_specific": { 00:23:24.999 "lvol": { 00:23:24.999 "lvol_store_uuid": "b7773cbf-4559-4cde-a702-b8e5ae74aadc", 00:23:24.999 "base_bdev": "nvme0n1", 00:23:24.999 "thin_provision": true, 00:23:24.999 "snapshot": false, 00:23:24.999 "clone": false, 00:23:24.999 "esnap_clone": false 00:23:24.999 } 00:23:24.999 } 00:23:24.999 } 00:23:24.999 ]' 00:23:24.999 19:37:50 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:23:24.999 19:37:50 -- common/autotest_common.sh@1369 -- # bs=4096 00:23:24.999 19:37:50 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:23:24.999 19:37:50 -- common/autotest_common.sh@1370 -- # nb=26476544 00:23:24.999 19:37:50 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:23:24.999 19:37:50 -- common/autotest_common.sh@1374 -- # echo 103424 00:23:24.999 19:37:50 -- ftl/common.sh@48 -- # cache_size=5171 00:23:24.999 19:37:50 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:25.259 19:37:50 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:25.259 19:37:50 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 3fa6c996-944d-45d3-8ad4-64f187af3489 00:23:25.259 19:37:50 -- common/autotest_common.sh@1364 -- # local bdev_name=3fa6c996-944d-45d3-8ad4-64f187af3489 00:23:25.259 19:37:50 -- common/autotest_common.sh@1365 -- # local bdev_info 00:23:25.259 19:37:50 -- common/autotest_common.sh@1366 -- # local bs 00:23:25.259 19:37:50 -- common/autotest_common.sh@1367 -- # local nb 00:23:25.259 19:37:50 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3fa6c996-944d-45d3-8ad4-64f187af3489 00:23:25.517 19:37:51 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:23:25.517 { 00:23:25.517 "name": "3fa6c996-944d-45d3-8ad4-64f187af3489", 00:23:25.517 "aliases": [ 00:23:25.517 "lvs/nvme0n1p0" 00:23:25.517 ], 00:23:25.517 "product_name": "Logical Volume", 00:23:25.517 "block_size": 4096, 00:23:25.517 "num_blocks": 26476544, 00:23:25.517 "uuid": "3fa6c996-944d-45d3-8ad4-64f187af3489", 00:23:25.517 "assigned_rate_limits": { 00:23:25.517 "rw_ios_per_sec": 0, 00:23:25.517 "rw_mbytes_per_sec": 0, 00:23:25.517 "r_mbytes_per_sec": 0, 
00:23:25.517 "w_mbytes_per_sec": 0 00:23:25.517 }, 00:23:25.517 "claimed": false, 00:23:25.517 "zoned": false, 00:23:25.517 "supported_io_types": { 00:23:25.517 "read": true, 00:23:25.517 "write": true, 00:23:25.517 "unmap": true, 00:23:25.517 "write_zeroes": true, 00:23:25.517 "flush": false, 00:23:25.517 "reset": true, 00:23:25.517 "compare": false, 00:23:25.517 "compare_and_write": false, 00:23:25.517 "abort": false, 00:23:25.517 "nvme_admin": false, 00:23:25.517 "nvme_io": false 00:23:25.517 }, 00:23:25.517 "driver_specific": { 00:23:25.517 "lvol": { 00:23:25.517 "lvol_store_uuid": "b7773cbf-4559-4cde-a702-b8e5ae74aadc", 00:23:25.517 "base_bdev": "nvme0n1", 00:23:25.517 "thin_provision": true, 00:23:25.517 "snapshot": false, 00:23:25.517 "clone": false, 00:23:25.517 "esnap_clone": false 00:23:25.517 } 00:23:25.517 } 00:23:25.517 } 00:23:25.517 ]' 00:23:25.517 19:37:51 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:23:25.517 19:37:51 -- common/autotest_common.sh@1369 -- # bs=4096 00:23:25.517 19:37:51 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:23:25.517 19:37:51 -- common/autotest_common.sh@1370 -- # nb=26476544 00:23:25.517 19:37:51 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:23:25.517 19:37:51 -- common/autotest_common.sh@1374 -- # echo 103424 00:23:25.517 19:37:51 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:25.517 19:37:51 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3fa6c996-944d-45d3-8ad4-64f187af3489 --l2p_dram_limit 10' 00:23:25.517 19:37:51 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:25.517 19:37:51 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:25.517 19:37:51 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:25.517 19:37:51 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3fa6c996-944d-45d3-8ad4-64f187af3489 --l2p_dram_limit 10 -c nvc0n1p0 00:23:25.776 [2024-04-24 19:37:51.339416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.776 [2024-04-24 19:37:51.339505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:25.776 [2024-04-24 19:37:51.339529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:25.776 [2024-04-24 19:37:51.339539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.776 [2024-04-24 19:37:51.339624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.776 [2024-04-24 19:37:51.339648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:25.776 [2024-04-24 19:37:51.339665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:23:25.776 [2024-04-24 19:37:51.339673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.776 [2024-04-24 19:37:51.339698] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:25.776 [2024-04-24 19:37:51.343443] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:25.776 [2024-04-24 19:37:51.343493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.776 [2024-04-24 19:37:51.343504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:25.776 [2024-04-24 19:37:51.343522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.807 ms 00:23:25.776 [2024-04-24 
19:37:51.343534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.776 [2024-04-24 19:37:51.343628] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9e036e91-3551-4163-9401-75fe915f2698 00:23:25.776 [2024-04-24 19:37:51.346203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.776 [2024-04-24 19:37:51.346237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:25.776 [2024-04-24 19:37:51.346247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:25.776 [2024-04-24 19:37:51.346257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.776 [2024-04-24 19:37:51.360907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.776 [2024-04-24 19:37:51.360959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:25.776 [2024-04-24 19:37:51.360972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.617 ms 00:23:25.776 [2024-04-24 19:37:51.360984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.776 [2024-04-24 19:37:51.361119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.776 [2024-04-24 19:37:51.361141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:25.776 [2024-04-24 19:37:51.361152] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:25.776 [2024-04-24 19:37:51.361163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.776 [2024-04-24 19:37:51.361267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.776 [2024-04-24 19:37:51.361293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:25.776 [2024-04-24 19:37:51.361304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:25.776 [2024-04-24 19:37:51.361316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.776 [2024-04-24 19:37:51.361346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:25.776 [2024-04-24 19:37:51.369955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.776 [2024-04-24 19:37:51.369993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:25.776 [2024-04-24 19:37:51.370007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.632 ms 00:23:25.776 [2024-04-24 19:37:51.370017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.776 [2024-04-24 19:37:51.370059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.776 [2024-04-24 19:37:51.370069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:25.776 [2024-04-24 19:37:51.370081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:25.776 [2024-04-24 19:37:51.370090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.776 [2024-04-24 19:37:51.370135] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:25.776 [2024-04-24 19:37:51.370264] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:25.776 [2024-04-24 19:37:51.370281] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 
00:23:25.776 [2024-04-24 19:37:51.370310] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:25.776 [2024-04-24 19:37:51.370327] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:25.776 [2024-04-24 19:37:51.370341] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:25.776 [2024-04-24 19:37:51.370354] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:25.776 [2024-04-24 19:37:51.370364] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:25.776 [2024-04-24 19:37:51.370375] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:25.776 [2024-04-24 19:37:51.370385] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:25.776 [2024-04-24 19:37:51.370415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.776 [2024-04-24 19:37:51.370425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:25.776 [2024-04-24 19:37:51.370437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:23:25.776 [2024-04-24 19:37:51.370446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.776 [2024-04-24 19:37:51.370526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.776 [2024-04-24 19:37:51.370541] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:25.776 [2024-04-24 19:37:51.370556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:25.776 [2024-04-24 19:37:51.370565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.776 [2024-04-24 19:37:51.370662] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:25.776 [2024-04-24 19:37:51.370678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:25.776 [2024-04-24 19:37:51.370694] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:25.776 [2024-04-24 19:37:51.370703] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.776 [2024-04-24 19:37:51.370715] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:25.776 [2024-04-24 19:37:51.370725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:25.776 [2024-04-24 19:37:51.370736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:25.776 [2024-04-24 19:37:51.370744] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:25.776 [2024-04-24 19:37:51.370755] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:25.776 [2024-04-24 19:37:51.370763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:25.776 [2024-04-24 19:37:51.370773] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:25.776 [2024-04-24 19:37:51.370781] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:25.776 [2024-04-24 19:37:51.370792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:25.776 [2024-04-24 19:37:51.370800] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:25.776 [2024-04-24 19:37:51.370832] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:25.776 [2024-04-24 19:37:51.370842] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.776 [2024-04-24 19:37:51.370854] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:25.776 [2024-04-24 19:37:51.370863] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:25.776 [2024-04-24 19:37:51.370876] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.776 [2024-04-24 19:37:51.370884] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:25.776 [2024-04-24 19:37:51.370895] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:25.776 [2024-04-24 19:37:51.370903] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:25.776 [2024-04-24 19:37:51.370914] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:25.776 [2024-04-24 19:37:51.370922] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:25.776 [2024-04-24 19:37:51.370932] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:25.776 [2024-04-24 19:37:51.370940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:25.776 [2024-04-24 19:37:51.370951] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:25.776 [2024-04-24 19:37:51.370959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:25.776 [2024-04-24 19:37:51.370969] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:25.776 [2024-04-24 19:37:51.370977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:25.777 [2024-04-24 19:37:51.370987] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:25.777 [2024-04-24 19:37:51.370995] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:25.777 [2024-04-24 19:37:51.371005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:25.777 [2024-04-24 19:37:51.371012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:25.777 [2024-04-24 19:37:51.371025] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:25.777 [2024-04-24 19:37:51.371033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:25.777 [2024-04-24 19:37:51.371043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:25.777 [2024-04-24 19:37:51.371051] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:25.777 [2024-04-24 19:37:51.371062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:25.777 [2024-04-24 19:37:51.371069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:25.777 [2024-04-24 19:37:51.371079] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:25.777 [2024-04-24 19:37:51.371088] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:25.777 [2024-04-24 19:37:51.371100] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:25.777 [2024-04-24 19:37:51.371111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:25.777 [2024-04-24 19:37:51.371123] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:25.777 [2024-04-24 19:37:51.371132] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:25.777 [2024-04-24 19:37:51.371142] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:25.777 
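
The layout numbers above make the effect of --l2p_dram_limit concrete: a fully resident L2P table would be 20971520 entries × 4 bytes = 80 MiB (exactly the "Region l2p ... blocks: 80.00 MiB" region), far above the 10 MiB limit, which is why the log later reports an L2P cache with a maximum resident size of 9 of 10 MiB instead of a fully resident table. The arithmetic:

    # entries * bytes-per-entry, reported in MiB
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80
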
[2024-04-24 19:37:51.371151] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:25.777 [2024-04-24 19:37:51.371161] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:25.777 [2024-04-24 19:37:51.371170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:25.777 [2024-04-24 19:37:51.371185] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:25.777 [2024-04-24 19:37:51.371196] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:25.777 [2024-04-24 19:37:51.371209] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:25.777 [2024-04-24 19:37:51.371218] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:25.777 [2024-04-24 19:37:51.371229] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:25.777 [2024-04-24 19:37:51.371247] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:25.777 [2024-04-24 19:37:51.371258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:25.777 [2024-04-24 19:37:51.371267] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:25.777 [2024-04-24 19:37:51.371279] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:25.777 [2024-04-24 19:37:51.371288] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:25.777 [2024-04-24 19:37:51.371299] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:25.777 [2024-04-24 19:37:51.371309] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:25.777 [2024-04-24 19:37:51.371319] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:25.777 [2024-04-24 19:37:51.371328] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:25.777 [2024-04-24 19:37:51.371339] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:25.777 [2024-04-24 19:37:51.371348] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:25.777 [2024-04-24 19:37:51.371364] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:25.777 [2024-04-24 19:37:51.371374] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:25.777 [2024-04-24 19:37:51.371385] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:25.777 [2024-04-24 19:37:51.371394] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:25.777 [2024-04-24 19:37:51.371406] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:25.777 [2024-04-24 19:37:51.371416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.777 [2024-04-24 19:37:51.371428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:25.777 [2024-04-24 19:37:51.371437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:23:25.777 [2024-04-24 19:37:51.371449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.777 [2024-04-24 19:37:51.404270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.777 [2024-04-24 19:37:51.404339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:25.777 [2024-04-24 19:37:51.404354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.814 ms 00:23:25.777 [2024-04-24 19:37:51.404366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:25.777 [2024-04-24 19:37:51.404472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:25.777 [2024-04-24 19:37:51.404486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:25.777 [2024-04-24 19:37:51.404496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:25.777 [2024-04-24 19:37:51.404508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.035 [2024-04-24 19:37:51.468251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.035 [2024-04-24 19:37:51.468331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:26.035 [2024-04-24 19:37:51.468347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.804 ms 00:23:26.035 [2024-04-24 19:37:51.468359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.035 [2024-04-24 19:37:51.468426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.035 [2024-04-24 19:37:51.468439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:26.035 [2024-04-24 19:37:51.468449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:26.035 [2024-04-24 19:37:51.468466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.035 [2024-04-24 19:37:51.469374] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.035 [2024-04-24 19:37:51.469403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:26.035 [2024-04-24 19:37:51.469415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 00:23:26.035 [2024-04-24 19:37:51.469427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.035 [2024-04-24 19:37:51.469562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.035 [2024-04-24 19:37:51.469587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:26.035 [2024-04-24 19:37:51.469597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:23:26.035 [2024-04-24 19:37:51.469612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:26.035 [2024-04-24 19:37:51.501353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.035 [2024-04-24 19:37:51.501426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:26.035 [2024-04-24 19:37:51.501442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.751 ms 00:23:26.035 [2024-04-24 19:37:51.501457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.035 [2024-04-24 19:37:51.521969] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:26.035 [2024-04-24 19:37:51.527936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.035 [2024-04-24 19:37:51.527990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:26.035 [2024-04-24 19:37:51.528009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.364 ms 00:23:26.035 [2024-04-24 19:37:51.528020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.035 [2024-04-24 19:37:51.626321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:26.035 [2024-04-24 19:37:51.626388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:26.035 [2024-04-24 19:37:51.626409] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.412 ms 00:23:26.035 [2024-04-24 19:37:51.626419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:26.035 [2024-04-24 19:37:51.626493] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:23:26.035 [2024-04-24 19:37:51.626507] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:23:30.219 [2024-04-24 19:37:55.475452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.219 [2024-04-24 19:37:55.475551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:30.219 [2024-04-24 19:37:55.475574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3856.359 ms 00:23:30.219 [2024-04-24 19:37:55.475584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.219 [2024-04-24 19:37:55.475864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.219 [2024-04-24 19:37:55.475888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:30.219 [2024-04-24 19:37:55.475902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:23:30.219 [2024-04-24 19:37:55.475911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.219 [2024-04-24 19:37:55.527191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.219 [2024-04-24 19:37:55.527300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:30.219 [2024-04-24 19:37:55.527320] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.274 ms 00:23:30.219 [2024-04-24 19:37:55.527331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.219 [2024-04-24 19:37:55.576467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.219 [2024-04-24 19:37:55.576561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:30.219 [2024-04-24 19:37:55.576583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.147 ms 00:23:30.219 
[2024-04-24 19:37:55.576592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.219 [2024-04-24 19:37:55.577233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.219 [2024-04-24 19:37:55.577261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:30.219 [2024-04-24 19:37:55.577276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:23:30.219 [2024-04-24 19:37:55.577289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.219 [2024-04-24 19:37:55.698118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.219 [2024-04-24 19:37:55.698183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:30.219 [2024-04-24 19:37:55.698203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.963 ms 00:23:30.219 [2024-04-24 19:37:55.698233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.219 [2024-04-24 19:37:55.746830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.219 [2024-04-24 19:37:55.746950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:30.219 [2024-04-24 19:37:55.746993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.593 ms 00:23:30.219 [2024-04-24 19:37:55.747008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.219 [2024-04-24 19:37:55.749499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.219 [2024-04-24 19:37:55.749535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:30.219 [2024-04-24 19:37:55.749550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.400 ms 00:23:30.219 [2024-04-24 19:37:55.749559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.219 [2024-04-24 19:37:55.798220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.219 [2024-04-24 19:37:55.798326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:30.219 [2024-04-24 19:37:55.798346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.625 ms 00:23:30.219 [2024-04-24 19:37:55.798357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.219 [2024-04-24 19:37:55.798471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.219 [2024-04-24 19:37:55.798483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:30.219 [2024-04-24 19:37:55.798497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:30.219 [2024-04-24 19:37:55.798511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.219 [2024-04-24 19:37:55.798686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.219 [2024-04-24 19:37:55.798705] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:30.219 [2024-04-24 19:37:55.798719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:30.219 [2024-04-24 19:37:55.798727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.219 [2024-04-24 19:37:55.800729] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4468.977 ms, result 0 00:23:30.219 { 00:23:30.219 "name": "ftl0", 00:23:30.219 "uuid": "9e036e91-3551-4163-9401-75fe915f2698" 00:23:30.219 } 
00:23:30.219 19:37:55 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:30.219 19:37:55 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:30.477 19:37:56 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:30.477 19:37:56 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:30.477 19:37:56 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:30.736 /dev/nbd0 00:23:30.736 19:37:56 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:30.736 19:37:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:30.736 19:37:56 -- common/autotest_common.sh@855 -- # local i 00:23:30.736 19:37:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:30.736 19:37:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:30.736 19:37:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:30.736 19:37:56 -- common/autotest_common.sh@859 -- # break 00:23:30.736 19:37:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:30.736 19:37:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:30.736 19:37:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:30.736 1+0 records in 00:23:30.736 1+0 records out 00:23:30.736 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493271 s, 8.3 MB/s 00:23:30.736 19:37:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:30.736 19:37:56 -- common/autotest_common.sh@872 -- # size=4096 00:23:30.736 19:37:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:30.736 19:37:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:30.736 19:37:56 -- common/autotest_common.sh@875 -- # return 0 00:23:30.736 19:37:56 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:30.997 [2024-04-24 19:37:56.441367] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:23:30.997 [2024-04-24 19:37:56.442085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81985 ] 00:23:30.997 [2024-04-24 19:37:56.617803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.568 [2024-04-24 19:37:56.950534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.508  Copying: 193/1024 [MB] (193 MBps) Copying: 388/1024 [MB] (194 MBps) Copying: 581/1024 [MB] (193 MBps) Copying: 773/1024 [MB] (191 MBps) Copying: 967/1024 [MB] (194 MBps) Copying: 1024/1024 [MB] (average 193 MBps) 00:23:38.508 00:23:38.508 19:38:04 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:40.512 19:38:06 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:40.771 [2024-04-24 19:38:06.229514] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:23:40.771 [2024-04-24 19:38:06.229700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82089 ] 00:23:40.771 [2024-04-24 19:38:06.400543] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.030 [2024-04-24 19:38:06.646004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.288  Copying: 19/1024 [MB] (19 MBps) Copying: 40/1024 [MB] (20 MBps) Copying: 64/1024 [MB] (23 MBps) Copying: 86/1024 [MB] (22 MBps) Copying: 109/1024 [MB] (23 MBps) Copying: 130/1024 [MB] (20 MBps) Copying: 150/1024 [MB] (20 MBps) Copying: 172/1024 [MB] (21 MBps) Copying: 195/1024 [MB] (23 MBps) Copying: 205/1024 [MB] (10 MBps) Copying: 226/1024 [MB] (21 MBps) Copying: 245/1024 [MB] (18 MBps) Copying: 264/1024 [MB] (18 MBps) Copying: 286/1024 [MB] (22 MBps) Copying: 309/1024 [MB] (23 MBps) Copying: 332/1024 [MB] (22 MBps) Copying: 355/1024 [MB] (22 MBps) Copying: 377/1024 [MB] (22 MBps) Copying: 400/1024 [MB] (23 MBps) Copying: 423/1024 [MB] (22 MBps) Copying: 447/1024 [MB] (23 MBps) Copying: 470/1024 [MB] (23 MBps) Copying: 493/1024 [MB] (22 MBps) Copying: 515/1024 [MB] (22 MBps) Copying: 538/1024 [MB] (22 MBps) Copying: 561/1024 [MB] (23 MBps) Copying: 584/1024 [MB] (22 MBps) Copying: 608/1024 [MB] (23 MBps) Copying: 632/1024 [MB] (24 MBps) Copying: 656/1024 [MB] (23 MBps) Copying: 679/1024 [MB] (22 MBps) Copying: 701/1024 [MB] (21 MBps) Copying: 722/1024 [MB] (21 MBps) Copying: 744/1024 [MB] (21 MBps) Copying: 767/1024 [MB] (23 MBps) Copying: 790/1024 [MB] (22 MBps) Copying: 811/1024 [MB] (21 MBps) Copying: 833/1024 [MB] (22 MBps) Copying: 856/1024 [MB] (22 MBps) Copying: 878/1024 [MB] (22 MBps) Copying: 900/1024 [MB] (21 MBps) Copying: 922/1024 [MB] (21 MBps) Copying: 944/1024 [MB] (21 MBps) Copying: 967/1024 [MB] (23 MBps) Copying: 989/1024 [MB] (22 MBps) Copying: 1012/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 22 MBps) 00:24:29.288 00:24:29.288 19:38:54 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:29.288 19:38:54 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:29.288 19:38:54 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:29.605 [2024-04-24 19:38:55.118336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.605 [2024-04-24 19:38:55.118394] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:29.605 [2024-04-24 19:38:55.118418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:29.605 [2024-04-24 19:38:55.118432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.605 [2024-04-24 19:38:55.118457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:29.605 [2024-04-24 19:38:55.122010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.605 [2024-04-24 19:38:55.122044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:29.605 [2024-04-24 19:38:55.122056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.540 ms 00:24:29.605 [2024-04-24 19:38:55.122064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.605 [2024-04-24 19:38:55.123941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:29.605 [2024-04-24 19:38:55.124002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:29.605 [2024-04-24 19:38:55.124018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.847 ms 00:24:29.605 [2024-04-24 19:38:55.124027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.605 [2024-04-24 19:38:55.141845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.605 [2024-04-24 19:38:55.141881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:29.605 [2024-04-24 19:38:55.141893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.823 ms 00:24:29.605 [2024-04-24 19:38:55.141901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.605 [2024-04-24 19:38:55.146913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.605 [2024-04-24 19:38:55.146940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:29.605 [2024-04-24 19:38:55.146972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.984 ms 00:24:29.605 [2024-04-24 19:38:55.146979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.605 [2024-04-24 19:38:55.185678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.605 [2024-04-24 19:38:55.185719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:29.605 [2024-04-24 19:38:55.185734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.697 ms 00:24:29.605 [2024-04-24 19:38:55.185742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.605 [2024-04-24 19:38:55.208390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.605 [2024-04-24 19:38:55.208431] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:29.605 [2024-04-24 19:38:55.208445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.650 ms 00:24:29.605 [2024-04-24 19:38:55.208453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.605 [2024-04-24 19:38:55.208600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.605 [2024-04-24 19:38:55.208612] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:29.605 [2024-04-24 19:38:55.208622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:29.605 [2024-04-24 19:38:55.208630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.605 [2024-04-24 19:38:55.247799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.605 [2024-04-24 19:38:55.247845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:29.605 [2024-04-24 19:38:55.247862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.206 ms 00:24:29.605 [2024-04-24 19:38:55.247869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.865 [2024-04-24 19:38:55.284361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.866 [2024-04-24 19:38:55.284398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:29.866 [2024-04-24 19:38:55.284428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.515 ms 00:24:29.866 [2024-04-24 19:38:55.284435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.866 [2024-04-24 19:38:55.321124] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.866 [2024-04-24 19:38:55.321177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:29.866 [2024-04-24 19:38:55.321207] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.717 ms 00:24:29.866 [2024-04-24 19:38:55.321215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.866 [2024-04-24 19:38:55.359081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.866 [2024-04-24 19:38:55.359124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:29.866 [2024-04-24 19:38:55.359137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.835 ms 00:24:29.866 [2024-04-24 19:38:55.359144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.866 [2024-04-24 19:38:55.359185] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:29.866 [2024-04-24 19:38:55.359201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 
[2024-04-24 19:38:55.359382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 
state: free 00:24:29.866 [2024-04-24 19:38:55.359597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 
0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:29.866 [2024-04-24 19:38:55.359969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.359979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.359987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.359998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:29.867 [2024-04-24 19:38:55.360187] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:29.867 [2024-04-24 19:38:55.360201] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e036e91-3551-4163-9401-75fe915f2698 00:24:29.867 [2024-04-24 19:38:55.360210] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:29.867 [2024-04-24 19:38:55.360219] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:29.867 [2024-04-24 19:38:55.360227] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:29.867 [2024-04-24 19:38:55.360238] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:29.867 [2024-04-24 19:38:55.360247] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:29.867 [2024-04-24 19:38:55.360257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:29.867 [2024-04-24 19:38:55.360265] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:29.867 [2024-04-24 19:38:55.360275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:29.867 [2024-04-24 19:38:55.360282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:29.867 [2024-04-24 19:38:55.360292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.867 [2024-04-24 19:38:55.360300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:29.867 [2024-04-24 19:38:55.360313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:24:29.867 [2024-04-24 19:38:55.360322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.867 [2024-04-24 19:38:55.381310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.867 [2024-04-24 19:38:55.381351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:29.867 [2024-04-24 19:38:55.381363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.973 ms 00:24:29.867 [2024-04-24 19:38:55.381371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.867 [2024-04-24 19:38:55.381628] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.867 [2024-04-24 19:38:55.381639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:29.867 [2024-04-24 19:38:55.381665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.231 ms 00:24:29.867 [2024-04-24 19:38:55.381674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.867 [2024-04-24 19:38:55.450116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.867 [2024-04-24 19:38:55.450171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:29.867 [2024-04-24 19:38:55.450186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.867 [2024-04-24 19:38:55.450210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.867 [2024-04-24 19:38:55.450292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.867 [2024-04-24 19:38:55.450301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:29.867 [2024-04-24 19:38:55.450316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.867 [2024-04-24 19:38:55.450324] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.867 [2024-04-24 19:38:55.450421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.867 [2024-04-24 19:38:55.450432] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:29.867 [2024-04-24 19:38:55.450442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.867 [2024-04-24 19:38:55.450450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.867 [2024-04-24 19:38:55.450470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.867 [2024-04-24 19:38:55.450477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:29.867 [2024-04-24 19:38:55.450486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.867 [2024-04-24 19:38:55.450495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.127 [2024-04-24 19:38:55.572075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.127 [2024-04-24 19:38:55.572129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:30.127 [2024-04-24 19:38:55.572145] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.127 [2024-04-24 19:38:55.572152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.127 [2024-04-24 19:38:55.620371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.127 [2024-04-24 19:38:55.620425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:30.127 [2024-04-24 19:38:55.620445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.127 [2024-04-24 19:38:55.620456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.127 [2024-04-24 19:38:55.620551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.127 [2024-04-24 19:38:55.620562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:30.127 [2024-04-24 19:38:55.620572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.127 [2024-04-24 19:38:55.620581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.127 [2024-04-24 19:38:55.620629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.127 [2024-04-24 19:38:55.620656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:30.127 [2024-04-24 
19:38:55.620667] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.127 [2024-04-24 19:38:55.620675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.127 [2024-04-24 19:38:55.620905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.127 [2024-04-24 19:38:55.620928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:30.127 [2024-04-24 19:38:55.620940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.127 [2024-04-24 19:38:55.620948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.127 [2024-04-24 19:38:55.621005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.127 [2024-04-24 19:38:55.621019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:30.127 [2024-04-24 19:38:55.621033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.127 [2024-04-24 19:38:55.621043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.127 [2024-04-24 19:38:55.621091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.127 [2024-04-24 19:38:55.621101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:30.127 [2024-04-24 19:38:55.621112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.127 [2024-04-24 19:38:55.621121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.127 [2024-04-24 19:38:55.621173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:30.127 [2024-04-24 19:38:55.621183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:30.127 [2024-04-24 19:38:55.621193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:30.127 [2024-04-24 19:38:55.621202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.127 [2024-04-24 19:38:55.621351] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 503.944 ms, result 0 00:24:30.127 true 00:24:30.127 19:38:55 -- ftl/dirty_shutdown.sh@83 -- # kill -9 81826 00:24:30.127 19:38:55 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81826 00:24:30.127 19:38:55 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:30.127 [2024-04-24 19:38:55.740467] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:24:30.127 [2024-04-24 19:38:55.740588] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82590 ] 00:24:30.387 [2024-04-24 19:38:55.905161] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.646 [2024-04-24 19:38:56.143385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.809  Copying: 238/1024 [MB] (238 MBps) Copying: 481/1024 [MB] (243 MBps) Copying: 723/1024 [MB] (241 MBps) Copying: 952/1024 [MB] (229 MBps) Copying: 1024/1024 [MB] (average 238 MBps) 00:24:36.809 00:24:36.809 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81826 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:36.809 19:39:02 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:36.809 [2024-04-24 19:39:02.218253] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:24:36.809 [2024-04-24 19:39:02.218376] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82655 ] 00:24:36.809 [2024-04-24 19:39:02.382605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.068 [2024-04-24 19:39:02.619172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.636 [2024-04-24 19:39:03.024396] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:37.636 [2024-04-24 19:39:03.024463] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:37.636 [2024-04-24 19:39:03.088109] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:37.636 [2024-04-24 19:39:03.088429] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:37.636 [2024-04-24 19:39:03.088784] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:37.896 [2024-04-24 19:39:03.327686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.896 [2024-04-24 19:39:03.327752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:37.896 [2024-04-24 19:39:03.327773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:37.896 [2024-04-24 19:39:03.327786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.896 [2024-04-24 19:39:03.327882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.896 [2024-04-24 19:39:03.327902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:37.896 [2024-04-24 19:39:03.327916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:24:37.896 [2024-04-24 19:39:03.327928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.896 [2024-04-24 19:39:03.327965] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:37.896 [2024-04-24 19:39:03.329125] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:37.896 [2024-04-24 19:39:03.329169] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:24:37.896 [2024-04-24 19:39:03.329184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:37.896 [2024-04-24 19:39:03.329204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.216 ms 00:24:37.896 [2024-04-24 19:39:03.329217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.896 [2024-04-24 19:39:03.331108] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:37.896 [2024-04-24 19:39:03.351179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.896 [2024-04-24 19:39:03.351225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:37.896 [2024-04-24 19:39:03.351238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.111 ms 00:24:37.896 [2024-04-24 19:39:03.351247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.896 [2024-04-24 19:39:03.351341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.896 [2024-04-24 19:39:03.351352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:37.896 [2024-04-24 19:39:03.351361] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:37.896 [2024-04-24 19:39:03.351373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.896 [2024-04-24 19:39:03.358287] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.896 [2024-04-24 19:39:03.358324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:37.896 [2024-04-24 19:39:03.358351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.855 ms 00:24:37.896 [2024-04-24 19:39:03.358359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.896 [2024-04-24 19:39:03.358459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.896 [2024-04-24 19:39:03.358475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:37.896 [2024-04-24 19:39:03.358484] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:24:37.896 [2024-04-24 19:39:03.358492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.896 [2024-04-24 19:39:03.358539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.896 [2024-04-24 19:39:03.358549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:37.896 [2024-04-24 19:39:03.358558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:37.896 [2024-04-24 19:39:03.358565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.896 [2024-04-24 19:39:03.358591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:37.896 [2024-04-24 19:39:03.364482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.896 [2024-04-24 19:39:03.364512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:37.896 [2024-04-24 19:39:03.364533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.910 ms 00:24:37.896 [2024-04-24 19:39:03.364540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.896 [2024-04-24 19:39:03.364568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.896 [2024-04-24 19:39:03.364595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Decorate bands 00:24:37.896 [2024-04-24 19:39:03.364603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:37.896 [2024-04-24 19:39:03.364611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.896 [2024-04-24 19:39:03.364667] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:37.897 [2024-04-24 19:39:03.364690] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:37.897 [2024-04-24 19:39:03.364724] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:37.897 [2024-04-24 19:39:03.364742] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:37.897 [2024-04-24 19:39:03.364817] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:37.897 [2024-04-24 19:39:03.364827] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:37.897 [2024-04-24 19:39:03.364838] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:37.897 [2024-04-24 19:39:03.364849] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:37.897 [2024-04-24 19:39:03.364859] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:37.897 [2024-04-24 19:39:03.364867] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:37.897 [2024-04-24 19:39:03.364875] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:37.897 [2024-04-24 19:39:03.364883] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:37.897 [2024-04-24 19:39:03.364891] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:37.897 [2024-04-24 19:39:03.364899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.897 [2024-04-24 19:39:03.364907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:37.897 [2024-04-24 19:39:03.364918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:24:37.897 [2024-04-24 19:39:03.364925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.897 [2024-04-24 19:39:03.364985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.897 [2024-04-24 19:39:03.364998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:37.897 [2024-04-24 19:39:03.365007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:24:37.897 [2024-04-24 19:39:03.365015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.897 [2024-04-24 19:39:03.365086] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:37.897 [2024-04-24 19:39:03.365101] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:37.897 [2024-04-24 19:39:03.365113] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:37.897 [2024-04-24 19:39:03.365124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.897 [2024-04-24 19:39:03.365132] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:37.897 [2024-04-24 
19:39:03.365140] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:37.897 [2024-04-24 19:39:03.365147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:37.897 [2024-04-24 19:39:03.365154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:37.897 [2024-04-24 19:39:03.365176] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:37.897 [2024-04-24 19:39:03.365184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:37.897 [2024-04-24 19:39:03.365191] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:37.897 [2024-04-24 19:39:03.365199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:37.897 [2024-04-24 19:39:03.365206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:37.897 [2024-04-24 19:39:03.365213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:37.897 [2024-04-24 19:39:03.365220] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:37.897 [2024-04-24 19:39:03.365227] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.897 [2024-04-24 19:39:03.365234] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:37.897 [2024-04-24 19:39:03.365242] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:37.897 [2024-04-24 19:39:03.365248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.897 [2024-04-24 19:39:03.365255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:37.897 [2024-04-24 19:39:03.365263] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:37.897 [2024-04-24 19:39:03.365270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:37.897 [2024-04-24 19:39:03.365278] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:37.897 [2024-04-24 19:39:03.365285] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:37.897 [2024-04-24 19:39:03.365292] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:37.897 [2024-04-24 19:39:03.365299] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:37.897 [2024-04-24 19:39:03.365305] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:37.897 [2024-04-24 19:39:03.365312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:37.897 [2024-04-24 19:39:03.365319] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:37.897 [2024-04-24 19:39:03.365326] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:37.897 [2024-04-24 19:39:03.365332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:37.897 [2024-04-24 19:39:03.365339] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:37.897 [2024-04-24 19:39:03.365347] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:37.897 [2024-04-24 19:39:03.365354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:37.897 [2024-04-24 19:39:03.365361] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:37.897 [2024-04-24 19:39:03.365368] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:37.897 [2024-04-24 19:39:03.365376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 
00:24:37.897 [2024-04-24 19:39:03.365383] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:37.897 [2024-04-24 19:39:03.365390] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:37.897 [2024-04-24 19:39:03.365397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:37.897 [2024-04-24 19:39:03.365404] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:37.897 [2024-04-24 19:39:03.365412] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:37.897 [2024-04-24 19:39:03.365419] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:37.897 [2024-04-24 19:39:03.365427] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.897 [2024-04-24 19:39:03.365435] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:37.897 [2024-04-24 19:39:03.365442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:37.897 [2024-04-24 19:39:03.365449] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:37.897 [2024-04-24 19:39:03.365457] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:37.897 [2024-04-24 19:39:03.365464] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:37.897 [2024-04-24 19:39:03.365471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:37.897 [2024-04-24 19:39:03.365479] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:37.897 [2024-04-24 19:39:03.365490] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:37.897 [2024-04-24 19:39:03.365498] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:37.897 [2024-04-24 19:39:03.365506] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:37.897 [2024-04-24 19:39:03.365514] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:37.897 [2024-04-24 19:39:03.365521] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:37.897 [2024-04-24 19:39:03.365529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:37.897 [2024-04-24 19:39:03.365537] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:37.897 [2024-04-24 19:39:03.365544] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:37.897 [2024-04-24 19:39:03.365552] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:37.897 [2024-04-24 19:39:03.365560] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:37.897 [2024-04-24 19:39:03.365568] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:37.897 [2024-04-24 
19:39:03.365576] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:37.897 [2024-04-24 19:39:03.365584] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:37.897 [2024-04-24 19:39:03.365592] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:37.897 [2024-04-24 19:39:03.365600] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:37.897 [2024-04-24 19:39:03.365609] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:37.897 [2024-04-24 19:39:03.365617] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:37.897 [2024-04-24 19:39:03.365625] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:37.897 [2024-04-24 19:39:03.365642] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:37.897 [2024-04-24 19:39:03.365651] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:37.897 [2024-04-24 19:39:03.365659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.897 [2024-04-24 19:39:03.365673] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:37.897 [2024-04-24 19:39:03.365681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:24:37.897 [2024-04-24 19:39:03.365688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.897 [2024-04-24 19:39:03.390547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.897 [2024-04-24 19:39:03.390584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:37.897 [2024-04-24 19:39:03.390595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.860 ms 00:24:37.898 [2024-04-24 19:39:03.390602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.898 [2024-04-24 19:39:03.390705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.898 [2024-04-24 19:39:03.390715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:37.898 [2024-04-24 19:39:03.390723] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:37.898 [2024-04-24 19:39:03.390730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.898 [2024-04-24 19:39:03.455855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.898 [2024-04-24 19:39:03.455907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:37.898 [2024-04-24 19:39:03.455920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.200 ms 00:24:37.898 [2024-04-24 19:39:03.455928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.898 [2024-04-24 19:39:03.455995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.898 [2024-04-24 19:39:03.456005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize valid map 00:24:37.898 [2024-04-24 19:39:03.456013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:37.898 [2024-04-24 19:39:03.456021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.898 [2024-04-24 19:39:03.456470] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.898 [2024-04-24 19:39:03.456481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:37.898 [2024-04-24 19:39:03.456489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:24:37.898 [2024-04-24 19:39:03.456496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.898 [2024-04-24 19:39:03.456592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.898 [2024-04-24 19:39:03.456604] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:37.898 [2024-04-24 19:39:03.456612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:37.898 [2024-04-24 19:39:03.456619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.898 [2024-04-24 19:39:03.479928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.898 [2024-04-24 19:39:03.479983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:37.898 [2024-04-24 19:39:03.479997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.312 ms 00:24:37.898 [2024-04-24 19:39:03.480007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.898 [2024-04-24 19:39:03.501118] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:37.898 [2024-04-24 19:39:03.501172] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:37.898 [2024-04-24 19:39:03.501186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.898 [2024-04-24 19:39:03.501195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:37.898 [2024-04-24 19:39:03.501205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.072 ms 00:24:37.898 [2024-04-24 19:39:03.501212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.898 [2024-04-24 19:39:03.533192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.898 [2024-04-24 19:39:03.533274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:37.898 [2024-04-24 19:39:03.533289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.974 ms 00:24:37.898 [2024-04-24 19:39:03.533309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.898 [2024-04-24 19:39:03.553464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.898 [2024-04-24 19:39:03.553500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:37.898 [2024-04-24 19:39:03.553511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.091 ms 00:24:37.898 [2024-04-24 19:39:03.553518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.157 [2024-04-24 19:39:03.572356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.157 [2024-04-24 19:39:03.572391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:38.157 [2024-04-24 19:39:03.572403] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.837 ms 00:24:38.157 [2024-04-24 19:39:03.572410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.157 [2024-04-24 19:39:03.572911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.157 [2024-04-24 19:39:03.572924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:38.157 [2024-04-24 19:39:03.572933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:24:38.157 [2024-04-24 19:39:03.572940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.157 [2024-04-24 19:39:03.660278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.157 [2024-04-24 19:39:03.660341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:38.157 [2024-04-24 19:39:03.660356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.489 ms 00:24:38.158 [2024-04-24 19:39:03.660364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.158 [2024-04-24 19:39:03.672921] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:38.158 [2024-04-24 19:39:03.676056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.158 [2024-04-24 19:39:03.676089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:38.158 [2024-04-24 19:39:03.676101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.646 ms 00:24:38.158 [2024-04-24 19:39:03.676109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.158 [2024-04-24 19:39:03.676202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.158 [2024-04-24 19:39:03.676212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:38.158 [2024-04-24 19:39:03.676220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:38.158 [2024-04-24 19:39:03.676226] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.158 [2024-04-24 19:39:03.676284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.158 [2024-04-24 19:39:03.676293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:38.158 [2024-04-24 19:39:03.676303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:38.158 [2024-04-24 19:39:03.676309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.158 [2024-04-24 19:39:03.677925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.158 [2024-04-24 19:39:03.678026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:38.158 [2024-04-24 19:39:03.678036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:24:38.158 [2024-04-24 19:39:03.678043] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.158 [2024-04-24 19:39:03.678072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.158 [2024-04-24 19:39:03.678081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:38.158 [2024-04-24 19:39:03.678088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:38.158 [2024-04-24 19:39:03.678098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.158 [2024-04-24 19:39:03.678128] mngt/ftl_mngt_self_test.c: 
208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:38.158 [2024-04-24 19:39:03.678137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.158 [2024-04-24 19:39:03.678144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:38.158 [2024-04-24 19:39:03.678151] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:38.158 [2024-04-24 19:39:03.678158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.158 [2024-04-24 19:39:03.715712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.158 [2024-04-24 19:39:03.715749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:38.158 [2024-04-24 19:39:03.715766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.609 ms 00:24:38.158 [2024-04-24 19:39:03.715775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.158 [2024-04-24 19:39:03.715842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:38.158 [2024-04-24 19:39:03.715852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:38.158 [2024-04-24 19:39:03.715860] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:38.158 [2024-04-24 19:39:03.715867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:38.158 [2024-04-24 19:39:03.717008] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.597 ms, result 0 00:25:10.996  Copying: 1024/1024 [MB] (average 31 MBps)[2024-04-24 19:39:36.412462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.996 [2024-04-24 19:39:36.412599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:10.996 [2024-04-24 19:39:36.412625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:10.996 [2024-04-24 19:39:36.412651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.996 [2024-04-24 19:39:36.415451] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:10.996 [2024-04-24 19:39:36.420311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.996 [2024-04-24 19:39:36.420346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:10.996 [2024-04-24
19:39:36.420357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.824 ms 00:25:10.996 [2024-04-24 19:39:36.420365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.996 [2024-04-24 19:39:36.445086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.996 [2024-04-24 19:39:36.445161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:10.996 [2024-04-24 19:39:36.445200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.936 ms 00:25:10.996 [2024-04-24 19:39:36.445216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.996 [2024-04-24 19:39:36.470888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.996 [2024-04-24 19:39:36.470934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:10.996 [2024-04-24 19:39:36.470948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.692 ms 00:25:10.996 [2024-04-24 19:39:36.470958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.996 [2024-04-24 19:39:36.476183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.996 [2024-04-24 19:39:36.476216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:10.996 [2024-04-24 19:39:36.476226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.200 ms 00:25:10.996 [2024-04-24 19:39:36.476241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.996 [2024-04-24 19:39:36.516796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.996 [2024-04-24 19:39:36.516868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:10.996 [2024-04-24 19:39:36.516881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.584 ms 00:25:10.996 [2024-04-24 19:39:36.516906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.996 [2024-04-24 19:39:36.540750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.996 [2024-04-24 19:39:36.540800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:10.996 [2024-04-24 19:39:36.540814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.832 ms 00:25:10.996 [2024-04-24 19:39:36.540822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.996 [2024-04-24 19:39:36.618800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.996 [2024-04-24 19:39:36.618886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:10.996 [2024-04-24 19:39:36.618912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 78.074 ms 00:25:10.996 [2024-04-24 19:39:36.618921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:10.996 [2024-04-24 19:39:36.662974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:10.996 [2024-04-24 19:39:36.663112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:10.996 [2024-04-24 19:39:36.663166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.100 ms 00:25:10.996 [2024-04-24 19:39:36.663186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.257 [2024-04-24 19:39:36.702862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.257 [2024-04-24 19:39:36.702991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: persist trim metadata 00:25:11.257 [2024-04-24 19:39:36.703020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.682 ms 00:25:11.257 [2024-04-24 19:39:36.703039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.257 [2024-04-24 19:39:36.742869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.257 [2024-04-24 19:39:36.743012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:11.257 [2024-04-24 19:39:36.743043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.846 ms 00:25:11.257 [2024-04-24 19:39:36.743063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.257 [2024-04-24 19:39:36.782111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.257 [2024-04-24 19:39:36.782237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:11.257 [2024-04-24 19:39:36.782270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.005 ms 00:25:11.257 [2024-04-24 19:39:36.782306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.257 [2024-04-24 19:39:36.782376] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:11.257 [2024-04-24 19:39:36.782426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 109312 / 261120 wr_cnt: 1 state: open 00:25:11.257 [2024-04-24 19:39:36.782467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 
state: free 00:25:11.257 [2024-04-24 19:39:36.782853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.782996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 
0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:11.257 [2024-04-24 19:39:36.783204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783469] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:11.258 [2024-04-24 19:39:36.783565] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:11.258 [2024-04-24 19:39:36.783588] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e036e91-3551-4163-9401-75fe915f2698 00:25:11.258 [2024-04-24 19:39:36.783597] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 109312 00:25:11.258 [2024-04-24 19:39:36.783606] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 110272 00:25:11.258 [2024-04-24 19:39:36.783614] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 109312 00:25:11.258 [2024-04-24 19:39:36.783627] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0088 00:25:11.258 [2024-04-24 19:39:36.783635] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:11.258 [2024-04-24 19:39:36.783644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:11.258 [2024-04-24 19:39:36.783660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:11.258 [2024-04-24 19:39:36.783668] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:11.258 [2024-04-24 19:39:36.783675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:11.258 [2024-04-24 19:39:36.783684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.258 [2024-04-24 19:39:36.783692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:11.258 [2024-04-24 19:39:36.783701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.311 ms 00:25:11.258 [2024-04-24 19:39:36.783709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.258 [2024-04-24 19:39:36.804009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.258 [2024-04-24 19:39:36.804058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:11.258 [2024-04-24 19:39:36.804070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.300 ms 00:25:11.258 [2024-04-24 19:39:36.804076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:11.258 [2024-04-24 19:39:36.804318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.258 [2024-04-24 19:39:36.804325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:11.258 [2024-04-24 19:39:36.804332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:25:11.258 [2024-04-24 19:39:36.804339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.258 [2024-04-24 19:39:36.857630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.258 [2024-04-24 19:39:36.857702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:11.258 [2024-04-24 19:39:36.857715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.258 [2024-04-24 19:39:36.857723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.258 [2024-04-24 19:39:36.857791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.258 [2024-04-24 19:39:36.857800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:11.258 [2024-04-24 19:39:36.857807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.258 [2024-04-24 19:39:36.857814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.258 [2024-04-24 19:39:36.857876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.258 [2024-04-24 19:39:36.857891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:11.258 [2024-04-24 19:39:36.857898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.258 [2024-04-24 19:39:36.857905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.258 [2024-04-24 19:39:36.857920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.258 [2024-04-24 19:39:36.857927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:11.258 [2024-04-24 19:39:36.857934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.258 [2024-04-24 19:39:36.857940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.518 [2024-04-24 19:39:36.979819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.518 [2024-04-24 19:39:36.979873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:11.518 [2024-04-24 19:39:36.979887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.518 [2024-04-24 19:39:36.979895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.518 [2024-04-24 19:39:37.027710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.518 [2024-04-24 19:39:37.027767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:11.518 [2024-04-24 19:39:37.027782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.518 [2024-04-24 19:39:37.027790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.518 [2024-04-24 19:39:37.027852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.518 [2024-04-24 19:39:37.027862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:11.518 [2024-04-24 19:39:37.027877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.518 [2024-04-24 
19:39:37.027886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.518 [2024-04-24 19:39:37.027917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.518 [2024-04-24 19:39:37.027927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:11.518 [2024-04-24 19:39:37.027936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.518 [2024-04-24 19:39:37.027943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.518 [2024-04-24 19:39:37.028045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.518 [2024-04-24 19:39:37.028057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:11.518 [2024-04-24 19:39:37.028065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.518 [2024-04-24 19:39:37.028077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.518 [2024-04-24 19:39:37.028111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.518 [2024-04-24 19:39:37.028121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:11.518 [2024-04-24 19:39:37.028130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.518 [2024-04-24 19:39:37.028138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.518 [2024-04-24 19:39:37.028176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.518 [2024-04-24 19:39:37.028185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:11.518 [2024-04-24 19:39:37.028193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.518 [2024-04-24 19:39:37.028204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.518 [2024-04-24 19:39:37.028251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:11.518 [2024-04-24 19:39:37.028260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:11.518 [2024-04-24 19:39:37.028268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:11.518 [2024-04-24 19:39:37.028275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.518 [2024-04-24 19:39:37.028407] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 619.916 ms, result 0 00:25:13.422 00:25:13.422 00:25:13.681 19:39:39 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:15.597 19:39:40 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:15.597 [2024-04-24 19:39:40.932475] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:25:15.597 [2024-04-24 19:39:40.932580] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83047 ] 00:25:15.597 [2024-04-24 19:39:41.093671] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.856 [2024-04-24 19:39:41.334296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.115 [2024-04-24 19:39:41.722930] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:16.115 [2024-04-24 19:39:41.722987] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:16.375 [2024-04-24 19:39:41.872464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.375 [2024-04-24 19:39:41.872517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:16.375 [2024-04-24 19:39:41.872534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:16.376 [2024-04-24 19:39:41.872543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.872614] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.872629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:16.376 [2024-04-24 19:39:41.872655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:16.376 [2024-04-24 19:39:41.872680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.872709] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:16.376 [2024-04-24 19:39:41.873819] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:16.376 [2024-04-24 19:39:41.873853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.873866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:16.376 [2024-04-24 19:39:41.873877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:25:16.376 [2024-04-24 19:39:41.873888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.875431] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:16.376 [2024-04-24 19:39:41.895693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.895736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:16.376 [2024-04-24 19:39:41.895760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.302 ms 00:25:16.376 [2024-04-24 19:39:41.895770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.895853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.895869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:16.376 [2024-04-24 19:39:41.895882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:16.376 [2024-04-24 19:39:41.895892] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.902869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 
19:39:41.902905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:16.376 [2024-04-24 19:39:41.902919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.905 ms 00:25:16.376 [2024-04-24 19:39:41.902928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.903056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.903073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:16.376 [2024-04-24 19:39:41.903086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:25:16.376 [2024-04-24 19:39:41.903096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.903159] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.903177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:16.376 [2024-04-24 19:39:41.903189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:16.376 [2024-04-24 19:39:41.903199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.903232] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:16.376 [2024-04-24 19:39:41.908935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.908967] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:16.376 [2024-04-24 19:39:41.908980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.723 ms 00:25:16.376 [2024-04-24 19:39:41.908990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.909025] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.909037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:16.376 [2024-04-24 19:39:41.909047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:16.376 [2024-04-24 19:39:41.909056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.909111] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:16.376 [2024-04-24 19:39:41.909145] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:16.376 [2024-04-24 19:39:41.909183] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:16.376 [2024-04-24 19:39:41.909203] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:16.376 [2024-04-24 19:39:41.909274] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:16.376 [2024-04-24 19:39:41.909289] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:16.376 [2024-04-24 19:39:41.909302] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:16.376 [2024-04-24 19:39:41.909317] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:16.376 [2024-04-24 19:39:41.909329] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:16.376 [2024-04-24 19:39:41.909345] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:16.376 [2024-04-24 19:39:41.909355] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:16.376 [2024-04-24 19:39:41.909366] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:16.376 [2024-04-24 19:39:41.909376] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:16.376 [2024-04-24 19:39:41.909386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.909397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:16.376 [2024-04-24 19:39:41.909408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:25:16.376 [2024-04-24 19:39:41.909418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.909479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.909492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:16.376 [2024-04-24 19:39:41.909506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:16.376 [2024-04-24 19:39:41.909516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.909587] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:16.376 [2024-04-24 19:39:41.909601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:16.376 [2024-04-24 19:39:41.909612] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:16.376 [2024-04-24 19:39:41.909624] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.376 [2024-04-24 19:39:41.909652] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:16.376 [2024-04-24 19:39:41.909663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:16.376 [2024-04-24 19:39:41.909689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:16.376 [2024-04-24 19:39:41.909700] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:16.376 [2024-04-24 19:39:41.909710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:16.376 [2024-04-24 19:39:41.909719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:16.376 [2024-04-24 19:39:41.909729] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:16.376 [2024-04-24 19:39:41.909739] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:16.376 [2024-04-24 19:39:41.909765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:16.376 [2024-04-24 19:39:41.909776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:16.376 [2024-04-24 19:39:41.909786] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:16.376 [2024-04-24 19:39:41.909796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.376 [2024-04-24 19:39:41.909805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:16.376 [2024-04-24 19:39:41.909815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:16.376 [2024-04-24 19:39:41.909825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:25:16.376 [2024-04-24 19:39:41.909835] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:16.376 [2024-04-24 19:39:41.909845] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:16.376 [2024-04-24 19:39:41.909854] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:16.376 [2024-04-24 19:39:41.909864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:16.376 [2024-04-24 19:39:41.909874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:16.376 [2024-04-24 19:39:41.909884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:16.376 [2024-04-24 19:39:41.909893] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:16.376 [2024-04-24 19:39:41.909903] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:16.376 [2024-04-24 19:39:41.909912] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:16.376 [2024-04-24 19:39:41.909921] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:16.376 [2024-04-24 19:39:41.909934] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:16.376 [2024-04-24 19:39:41.909944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:16.376 [2024-04-24 19:39:41.909954] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:16.376 [2024-04-24 19:39:41.909963] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:16.376 [2024-04-24 19:39:41.909973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:16.376 [2024-04-24 19:39:41.909983] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:16.376 [2024-04-24 19:39:41.909992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:16.376 [2024-04-24 19:39:41.910002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:16.376 [2024-04-24 19:39:41.910012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:16.376 [2024-04-24 19:39:41.910021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:16.376 [2024-04-24 19:39:41.910031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:16.376 [2024-04-24 19:39:41.910040] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:16.376 [2024-04-24 19:39:41.910051] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:16.376 [2024-04-24 19:39:41.910066] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:16.376 [2024-04-24 19:39:41.910082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.376 [2024-04-24 19:39:41.910092] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:16.376 [2024-04-24 19:39:41.910102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:16.376 [2024-04-24 19:39:41.910113] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:16.376 [2024-04-24 19:39:41.910122] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:16.376 [2024-04-24 19:39:41.910131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:16.376 [2024-04-24 19:39:41.910141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:16.376 [2024-04-24 19:39:41.910153] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:16.376 [2024-04-24 19:39:41.910166] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.376 [2024-04-24 19:39:41.910179] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:16.376 [2024-04-24 19:39:41.910190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:16.376 [2024-04-24 19:39:41.910200] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:16.376 [2024-04-24 19:39:41.910211] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:16.376 [2024-04-24 19:39:41.910222] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:16.376 [2024-04-24 19:39:41.910249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:16.376 [2024-04-24 19:39:41.910261] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:16.376 [2024-04-24 19:39:41.910272] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:16.376 [2024-04-24 19:39:41.910283] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:16.376 [2024-04-24 19:39:41.910295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:16.376 [2024-04-24 19:39:41.910306] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:16.376 [2024-04-24 19:39:41.910318] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:16.376 [2024-04-24 19:39:41.910330] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:16.376 [2024-04-24 19:39:41.910341] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:16.376 [2024-04-24 19:39:41.910354] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.376 [2024-04-24 19:39:41.910366] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:16.376 [2024-04-24 19:39:41.910377] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:16.376 [2024-04-24 19:39:41.910389] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:16.376 [2024-04-24 19:39:41.910401] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:25:16.376 [2024-04-24 19:39:41.910414] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.910425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:16.376 [2024-04-24 19:39:41.910437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.864 ms 00:25:16.376 [2024-04-24 19:39:41.910448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.934862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.934898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:16.376 [2024-04-24 19:39:41.934914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.400 ms 00:25:16.376 [2024-04-24 19:39:41.934925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.935026] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.935044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:16.376 [2024-04-24 19:39:41.935055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:25:16.376 [2024-04-24 19:39:41.935065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.996437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.996485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:16.376 [2024-04-24 19:39:41.996502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.429 ms 00:25:16.376 [2024-04-24 19:39:41.996516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.996588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.996599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:16.376 [2024-04-24 19:39:41.996609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:16.376 [2024-04-24 19:39:41.996618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.997186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.997215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:16.376 [2024-04-24 19:39:41.997228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:25:16.376 [2024-04-24 19:39:41.997239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:41.997415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:41.997439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:16.376 [2024-04-24 19:39:41.997451] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:25:16.376 [2024-04-24 19:39:41.997461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:42.018393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:42.018434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:16.376 [2024-04-24 19:39:42.018450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.938 ms 00:25:16.376 [2024-04-24 
19:39:42.018460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.376 [2024-04-24 19:39:42.038426] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:16.376 [2024-04-24 19:39:42.038486] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:16.376 [2024-04-24 19:39:42.038505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.376 [2024-04-24 19:39:42.038516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:16.376 [2024-04-24 19:39:42.038530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.936 ms 00:25:16.376 [2024-04-24 19:39:42.038538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.636 [2024-04-24 19:39:42.068915] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.636 [2024-04-24 19:39:42.068960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:16.636 [2024-04-24 19:39:42.068976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.350 ms 00:25:16.636 [2024-04-24 19:39:42.068986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.636 [2024-04-24 19:39:42.087179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.636 [2024-04-24 19:39:42.087230] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:16.636 [2024-04-24 19:39:42.087246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.168 ms 00:25:16.636 [2024-04-24 19:39:42.087263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.636 [2024-04-24 19:39:42.106724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.636 [2024-04-24 19:39:42.106770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:16.636 [2024-04-24 19:39:42.106787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.409 ms 00:25:16.636 [2024-04-24 19:39:42.106796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.636 [2024-04-24 19:39:42.107316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.636 [2024-04-24 19:39:42.107366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:16.636 [2024-04-24 19:39:42.107381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:25:16.636 [2024-04-24 19:39:42.107393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.636 [2024-04-24 19:39:42.199517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.636 [2024-04-24 19:39:42.199583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:16.636 [2024-04-24 19:39:42.199601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.266 ms 00:25:16.636 [2024-04-24 19:39:42.199611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.636 [2024-04-24 19:39:42.213308] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:16.636 [2024-04-24 19:39:42.216794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.636 [2024-04-24 19:39:42.216834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:16.636 [2024-04-24 19:39:42.216852] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.112 ms 00:25:16.636 [2024-04-24 19:39:42.216863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.636 [2024-04-24 19:39:42.216987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.636 [2024-04-24 19:39:42.217012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:16.636 [2024-04-24 19:39:42.217025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:16.636 [2024-04-24 19:39:42.217037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.636 [2024-04-24 19:39:42.218455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.636 [2024-04-24 19:39:42.218499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:16.636 [2024-04-24 19:39:42.218513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:25:16.636 [2024-04-24 19:39:42.218524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.636 [2024-04-24 19:39:42.220563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.636 [2024-04-24 19:39:42.220595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:16.636 [2024-04-24 19:39:42.220612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.995 ms 00:25:16.636 [2024-04-24 19:39:42.220623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.636 [2024-04-24 19:39:42.220683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.636 [2024-04-24 19:39:42.220698] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:16.636 [2024-04-24 19:39:42.220726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:16.636 [2024-04-24 19:39:42.220738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.636 [2024-04-24 19:39:42.220786] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:16.636 [2024-04-24 19:39:42.220803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.636 [2024-04-24 19:39:42.220816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:16.636 [2024-04-24 19:39:42.220829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:16.636 [2024-04-24 19:39:42.220846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.636 [2024-04-24 19:39:42.263652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.636 [2024-04-24 19:39:42.263697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:16.636 [2024-04-24 19:39:42.263714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.856 ms 00:25:16.636 [2024-04-24 19:39:42.263724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.637 [2024-04-24 19:39:42.263821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.637 [2024-04-24 19:39:42.263841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:16.637 [2024-04-24 19:39:42.263852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:25:16.637 [2024-04-24 19:39:42.263862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.637 [2024-04-24 19:39:42.270843] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 397.542 ms, result 0 00:25:46.249  Copying: 1024/1024 [MB] (average 35 MBps)[2024-04-24 19:40:11.617965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.249 [2024-04-24 19:40:11.618156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:46.249 [2024-04-24 19:40:11.618398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:46.249 [2024-04-24 19:40:11.618429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.249 [2024-04-24 19:40:11.618505] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:46.249 [2024-04-24 19:40:11.623874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.249 [2024-04-24 19:40:11.623974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:46.249 [2024-04-24 19:40:11.624017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.323 ms 00:25:46.249 [2024-04-24 19:40:11.624042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.249 [2024-04-24 19:40:11.624424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.249 [2024-04-24 19:40:11.624482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:46.249 [2024-04-24 19:40:11.624516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:25:46.249 [2024-04-24 19:40:11.624546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.249 [2024-04-24 19:40:11.634978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.249 [2024-04-24 19:40:11.635087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:46.249 [2024-04-24 19:40:11.635128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.410 ms 00:25:46.249 [2024-04-24 19:40:11.635160] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.249 [2024-04-24 19:40:11.640285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.249 [2024-04-24 19:40:11.640361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:46.249 [2024-04-24 19:40:11.640390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.075 ms 00:25:46.249 [2024-04-24 19:40:11.640427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.249 [2024-04-24 19:40:11.676893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
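The spdk_dd pass above reports 1024 MB copied at an average of 35 MBps. As a rough cross-check of that figure (plain bc arithmetic, not part of the test output):

  $ echo '1024 / 35' | bc -l
  29.25714285714285714285

i.e. about 29.3 s of transfer time, which is consistent with the roughly 30 s wall-clock gap between the 00:25:16.376 Jenkins stamps before the copy and the 00:25:46.249 stamp after it.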
00:25:46.249 [2024-04-24 19:40:11.676999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:46.249 [2024-04-24 19:40:11.677026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.448 ms 00:25:46.249 [2024-04-24 19:40:11.677045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.249 [2024-04-24 19:40:11.697371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.249 [2024-04-24 19:40:11.697475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:46.249 [2024-04-24 19:40:11.697509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.315 ms 00:25:46.249 [2024-04-24 19:40:11.697528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.249 [2024-04-24 19:40:11.700988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.249 [2024-04-24 19:40:11.701060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:46.249 [2024-04-24 19:40:11.701090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.416 ms 00:25:46.249 [2024-04-24 19:40:11.701109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.249 [2024-04-24 19:40:11.736930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.249 [2024-04-24 19:40:11.737037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:46.249 [2024-04-24 19:40:11.737064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.861 ms 00:25:46.249 [2024-04-24 19:40:11.737083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.249 [2024-04-24 19:40:11.773553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.249 [2024-04-24 19:40:11.773649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:46.249 [2024-04-24 19:40:11.773693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.487 ms 00:25:46.249 [2024-04-24 19:40:11.773712] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.249 [2024-04-24 19:40:11.808363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.249 [2024-04-24 19:40:11.808437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:46.249 [2024-04-24 19:40:11.808465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.673 ms 00:25:46.249 [2024-04-24 19:40:11.808484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.249 [2024-04-24 19:40:11.845288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.249 [2024-04-24 19:40:11.845374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:46.249 [2024-04-24 19:40:11.845401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.779 ms 00:25:46.249 [2024-04-24 19:40:11.845419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.249 [2024-04-24 19:40:11.845479] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:46.249 [2024-04-24 19:40:11.845543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:46.249 [2024-04-24 19:40:11.845579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:25:46.249 [2024-04-24 19:40:11.845655] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.845700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.845764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.845806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.845852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.845897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.845930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.845958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.846009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.846047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.846086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.846121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.846159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.846199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.846245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:46.249 [2024-04-24 19:40:11.846279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846653] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.846996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 
[2024-04-24 19:40:11.847033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:25:46.250 [2024-04-24 19:40:11.847216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:46.250 [2024-04-24 19:40:11.847401] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:46.250 [2024-04-24 19:40:11.847409] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e036e91-3551-4163-9401-75fe915f2698 
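The band-validity dump above lists all 100 bands with their written/total block counts and state. A minimal awk tally over the saved console log (ftl.log is a hypothetical file name, assuming one log entry per line as the console printed them):

  awk '/ftl_dev_dump_bands.*Band [0-9]+:/ {
         state[$NF]++            # last field is the band state
         written += $(NF-6)      # field layout: ... <written> / <total> wr_cnt: <n> state: <s>
       }
       END {
         for (s in state) printf "%-6s %d bands\n", s, state[s]
         printf "blocks written: %d\n", written
       }' ftl.log

For this dump that yields 1 closed, 1 open and 98 free bands, and 261120 + 3840 = 264960 written blocks, matching the 'total valid LBAs: 264960' figure in the stats dump that follows.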
00:25:46.250 [2024-04-24 19:40:11.847417] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:25:46.250 [2024-04-24 19:40:11.847424] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 157632 00:25:46.250 [2024-04-24 19:40:11.847431] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 155648 00:25:46.250 [2024-04-24 19:40:11.847438] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:25:46.250 [2024-04-24 19:40:11.847461] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:46.250 [2024-04-24 19:40:11.847469] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:46.250 [2024-04-24 19:40:11.847480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:46.250 [2024-04-24 19:40:11.847487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:46.250 [2024-04-24 19:40:11.847493] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:46.251 [2024-04-24 19:40:11.847500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.251 [2024-04-24 19:40:11.847508] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:46.251 [2024-04-24 19:40:11.847515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.041 ms 00:25:46.251 [2024-04-24 19:40:11.847523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.251 [2024-04-24 19:40:11.867211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.251 [2024-04-24 19:40:11.867246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:46.251 [2024-04-24 19:40:11.867256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.696 ms 00:25:46.251 [2024-04-24 19:40:11.867270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.251 [2024-04-24 19:40:11.867538] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:46.251 [2024-04-24 19:40:11.867549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:46.251 [2024-04-24 19:40:11.867559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:25:46.251 [2024-04-24 19:40:11.867566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.251 [2024-04-24 19:40:11.922528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.251 [2024-04-24 19:40:11.922572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:46.251 [2024-04-24 19:40:11.922588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.251 [2024-04-24 19:40:11.922611] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.251 [2024-04-24 19:40:11.922699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.251 [2024-04-24 19:40:11.922711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:46.251 [2024-04-24 19:40:11.922719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.251 [2024-04-24 19:40:11.922725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.251 [2024-04-24 19:40:11.922783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.251 [2024-04-24 19:40:11.922793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:46.251 [2024-04-24 
19:40:11.922801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.251 [2024-04-24 19:40:11.922812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.251 [2024-04-24 19:40:11.922827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.251 [2024-04-24 19:40:11.922834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:46.251 [2024-04-24 19:40:11.922842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.251 [2024-04-24 19:40:11.922849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.511 [2024-04-24 19:40:12.042285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.511 [2024-04-24 19:40:12.042335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:46.511 [2024-04-24 19:40:12.042346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.511 [2024-04-24 19:40:12.042358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.511 [2024-04-24 19:40:12.088572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.511 [2024-04-24 19:40:12.088629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:46.511 [2024-04-24 19:40:12.088658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.511 [2024-04-24 19:40:12.088666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.511 [2024-04-24 19:40:12.088725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.511 [2024-04-24 19:40:12.088733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:46.511 [2024-04-24 19:40:12.088741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.511 [2024-04-24 19:40:12.088748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.511 [2024-04-24 19:40:12.088783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.511 [2024-04-24 19:40:12.088791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:46.511 [2024-04-24 19:40:12.088798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.511 [2024-04-24 19:40:12.088805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.511 [2024-04-24 19:40:12.088918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.511 [2024-04-24 19:40:12.088929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:46.511 [2024-04-24 19:40:12.088936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.511 [2024-04-24 19:40:12.088943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.511 [2024-04-24 19:40:12.088985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.511 [2024-04-24 19:40:12.088998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:46.511 [2024-04-24 19:40:12.089005] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.511 [2024-04-24 19:40:12.089011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.511 [2024-04-24 19:40:12.089047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.511 [2024-04-24 19:40:12.089054] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:46.511 [2024-04-24 19:40:12.089060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.511 [2024-04-24 19:40:12.089067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.511 [2024-04-24 19:40:12.089107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:46.511 [2024-04-24 19:40:12.089116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:46.511 [2024-04-24 19:40:12.089123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:46.511 [2024-04-24 19:40:12.089129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:46.511 [2024-04-24 19:40:12.089293] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 472.200 ms, result 0 00:25:47.888 00:25:47.888 00:25:47.888 19:40:13 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:49.802 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:49.802 19:40:15 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:49.802 [2024-04-24 19:40:15.082045] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:25:49.802 [2024-04-24 19:40:15.082229] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83389 ] 00:25:49.802 [2024-04-24 19:40:15.246011] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.802 [2024-04-24 19:40:15.476630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.369 [2024-04-24 19:40:15.875943] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:50.369 [2024-04-24 19:40:15.876066] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:50.369 [2024-04-24 19:40:16.025561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.369 [2024-04-24 19:40:16.025718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:50.369 [2024-04-24 19:40:16.025770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:50.370 [2024-04-24 19:40:16.025796] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.370 [2024-04-24 19:40:16.025874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.370 [2024-04-24 19:40:16.025904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:50.370 [2024-04-24 19:40:16.025971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:50.370 [2024-04-24 19:40:16.025991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.370 [2024-04-24 19:40:16.026039] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:50.370 [2024-04-24 19:40:16.027242] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:50.370 [2024-04-24 19:40:16.027337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.370 [2024-04-24 
19:40:16.027361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:50.370 [2024-04-24 19:40:16.027382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.306 ms 00:25:50.370 [2024-04-24 19:40:16.027418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.370 [2024-04-24 19:40:16.028825] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:50.630 [2024-04-24 19:40:16.047300] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.630 [2024-04-24 19:40:16.047372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:50.630 [2024-04-24 19:40:16.047410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.495 ms 00:25:50.630 [2024-04-24 19:40:16.047430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.630 [2024-04-24 19:40:16.047513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.630 [2024-04-24 19:40:16.047548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:50.630 [2024-04-24 19:40:16.047603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:50.630 [2024-04-24 19:40:16.047624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.630 [2024-04-24 19:40:16.054380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.630 [2024-04-24 19:40:16.054444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:50.630 [2024-04-24 19:40:16.054491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.635 ms 00:25:50.630 [2024-04-24 19:40:16.054510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.630 [2024-04-24 19:40:16.054615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.630 [2024-04-24 19:40:16.054666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:50.630 [2024-04-24 19:40:16.054710] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:50.630 [2024-04-24 19:40:16.054735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.630 [2024-04-24 19:40:16.054793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.630 [2024-04-24 19:40:16.054824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:50.630 [2024-04-24 19:40:16.054852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:50.630 [2024-04-24 19:40:16.054873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.630 [2024-04-24 19:40:16.054940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:50.630 [2024-04-24 19:40:16.060489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.630 [2024-04-24 19:40:16.060553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:50.630 [2024-04-24 19:40:16.060584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.568 ms 00:25:50.630 [2024-04-24 19:40:16.060602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.630 [2024-04-24 19:40:16.060661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.630 [2024-04-24 19:40:16.060685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:50.630 [2024-04-24 
19:40:16.060732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:50.630 [2024-04-24 19:40:16.060757] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.630 [2024-04-24 19:40:16.060829] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:50.630 [2024-04-24 19:40:16.060898] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:50.630 [2024-04-24 19:40:16.060958] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:50.630 [2024-04-24 19:40:16.061000] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:50.630 [2024-04-24 19:40:16.061118] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:50.630 [2024-04-24 19:40:16.061158] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:50.630 [2024-04-24 19:40:16.061194] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:50.630 [2024-04-24 19:40:16.061248] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:50.630 [2024-04-24 19:40:16.061285] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:50.630 [2024-04-24 19:40:16.061329] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:50.630 [2024-04-24 19:40:16.061354] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:50.630 [2024-04-24 19:40:16.061380] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:50.630 [2024-04-24 19:40:16.061409] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:50.630 [2024-04-24 19:40:16.061436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.630 [2024-04-24 19:40:16.061463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:50.630 [2024-04-24 19:40:16.061491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:25:50.630 [2024-04-24 19:40:16.061518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.630 [2024-04-24 19:40:16.061594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.631 [2024-04-24 19:40:16.061605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:50.631 [2024-04-24 19:40:16.061615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:25:50.631 [2024-04-24 19:40:16.061622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.631 [2024-04-24 19:40:16.061691] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:50.631 [2024-04-24 19:40:16.061702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:50.631 [2024-04-24 19:40:16.061710] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:50.631 [2024-04-24 19:40:16.061717] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.631 [2024-04-24 19:40:16.061725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:50.631 [2024-04-24 19:40:16.061731] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 0.12 MiB 00:25:50.631 [2024-04-24 19:40:16.061737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:50.631 [2024-04-24 19:40:16.061743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:50.631 [2024-04-24 19:40:16.061749] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:50.631 [2024-04-24 19:40:16.061756] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:50.631 [2024-04-24 19:40:16.061762] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:50.631 [2024-04-24 19:40:16.061768] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:50.631 [2024-04-24 19:40:16.061787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:50.631 [2024-04-24 19:40:16.061793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:50.631 [2024-04-24 19:40:16.061801] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:50.631 [2024-04-24 19:40:16.061807] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.631 [2024-04-24 19:40:16.061813] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:50.631 [2024-04-24 19:40:16.061819] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:50.631 [2024-04-24 19:40:16.061825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.631 [2024-04-24 19:40:16.061831] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:50.631 [2024-04-24 19:40:16.061837] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:50.631 [2024-04-24 19:40:16.061844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:50.631 [2024-04-24 19:40:16.061850] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:50.631 [2024-04-24 19:40:16.061857] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:50.631 [2024-04-24 19:40:16.061863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:50.631 [2024-04-24 19:40:16.061868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:50.631 [2024-04-24 19:40:16.061875] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:50.631 [2024-04-24 19:40:16.061880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:50.631 [2024-04-24 19:40:16.061886] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:50.631 [2024-04-24 19:40:16.061892] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:50.631 [2024-04-24 19:40:16.061897] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:50.631 [2024-04-24 19:40:16.061903] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:50.631 [2024-04-24 19:40:16.061909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:50.631 [2024-04-24 19:40:16.061915] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:50.631 [2024-04-24 19:40:16.061922] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:50.631 [2024-04-24 19:40:16.061928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:50.631 [2024-04-24 19:40:16.061934] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:50.631 [2024-04-24 19:40:16.061940] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:50.631 [2024-04-24 19:40:16.061946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:50.631 [2024-04-24 19:40:16.061951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:50.631 [2024-04-24 19:40:16.061957] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:50.631 [2024-04-24 19:40:16.061964] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:50.631 [2024-04-24 19:40:16.061973] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:50.631 [2024-04-24 19:40:16.061985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.631 [2024-04-24 19:40:16.061992] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:50.631 [2024-04-24 19:40:16.061999] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:50.631 [2024-04-24 19:40:16.062005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:50.631 [2024-04-24 19:40:16.062012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:50.631 [2024-04-24 19:40:16.062018] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:50.631 [2024-04-24 19:40:16.062024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:50.631 [2024-04-24 19:40:16.062031] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:50.631 [2024-04-24 19:40:16.062039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:50.631 [2024-04-24 19:40:16.062047] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:50.631 [2024-04-24 19:40:16.062054] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:50.631 [2024-04-24 19:40:16.062061] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:50.631 [2024-04-24 19:40:16.062067] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:50.631 [2024-04-24 19:40:16.062074] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:50.631 [2024-04-24 19:40:16.062080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:50.631 [2024-04-24 19:40:16.062087] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:50.631 [2024-04-24 19:40:16.062093] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:50.631 [2024-04-24 19:40:16.062100] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:50.631 [2024-04-24 19:40:16.062106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:50.631 [2024-04-24 19:40:16.062113] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:50.631 [2024-04-24 19:40:16.062119] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:50.631 [2024-04-24 19:40:16.062127] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:50.631 [2024-04-24 19:40:16.062134] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:50.631 [2024-04-24 19:40:16.062141] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:50.631 [2024-04-24 19:40:16.062149] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:50.631 [2024-04-24 19:40:16.062156] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:50.631 [2024-04-24 19:40:16.062162] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:50.631 [2024-04-24 19:40:16.062169] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:50.631 [2024-04-24 19:40:16.062177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.631 [2024-04-24 19:40:16.062184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:50.631 [2024-04-24 19:40:16.062191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:25:50.631 [2024-04-24 19:40:16.062197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.631 [2024-04-24 19:40:16.086352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.631 [2024-04-24 19:40:16.086383] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:50.631 [2024-04-24 19:40:16.086395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.162 ms 00:25:50.631 [2024-04-24 19:40:16.086402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.631 [2024-04-24 19:40:16.086482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.631 [2024-04-24 19:40:16.086493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:50.631 [2024-04-24 19:40:16.086502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:50.631 [2024-04-24 19:40:16.086509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.631 [2024-04-24 19:40:16.147471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.631 [2024-04-24 19:40:16.147506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:50.631 [2024-04-24 19:40:16.147518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.034 ms 00:25:50.631 [2024-04-24 19:40:16.147528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.631 [2024-04-24 19:40:16.147570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.631 [2024-04-24 19:40:16.147578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
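The SB metadata layout dumps above describe each region by blk_offs/blk_sz, and within each device the regions are expected to tile the space without holes; e.g. type 0x2 at 0x20 + 0x5000 ends exactly where type 0x3 begins at 0x5020. A small sketch that re-checks the whole chain and recomputes the WAF from the stats dump (ftl.log is again a hypothetical per-line copy of this console output; strtonum is gawk-specific):

  gawk '
    /SB metadata layout/ { prev_end = -1 }          # each device layout restarts at offset 0x0
    /md_layout_dump.*Region type:/ {
      off = sz = -1
      for (i = 1; i <= NF; i++) {
        if ($i ~ /^blk_offs:/) off = strtonum(substr($i, 10))
        if ($i ~ /^blk_sz:/)   sz  = strtonum(substr($i, 8))
      }
      if (prev_end >= 0 && off != prev_end)
        printf "hole/overlap at blk_offs 0x%x (expected 0x%x)\n", off, prev_end
      prev_end = off + sz
    }
    /total writes:/ { tw = $NF }                    # from ftl_dev_dump_stats
    /user writes:/  { uw = $NF }
    END { if (uw) printf "WAF = %.4f\n", tw / uw }  # 157632 / 155648 -> 1.0127
  ' ftl.log

On the dumps shown here the contiguity check stays silent for both the nvc and base-dev chains, and the recomputed WAF matches the logged 1.0127.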
00:25:50.631 [2024-04-24 19:40:16.147587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:50.631 [2024-04-24 19:40:16.147593] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.631 [2024-04-24 19:40:16.148041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.631 [2024-04-24 19:40:16.148056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:50.631 [2024-04-24 19:40:16.148064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:25:50.631 [2024-04-24 19:40:16.148070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.631 [2024-04-24 19:40:16.148168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.631 [2024-04-24 19:40:16.148179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:50.631 [2024-04-24 19:40:16.148186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:25:50.631 [2024-04-24 19:40:16.148193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.632 [2024-04-24 19:40:16.169498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.632 [2024-04-24 19:40:16.169532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:50.632 [2024-04-24 19:40:16.169542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.326 ms 00:25:50.632 [2024-04-24 19:40:16.169550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.632 [2024-04-24 19:40:16.188172] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:50.632 [2024-04-24 19:40:16.188205] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:50.632 [2024-04-24 19:40:16.188216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.632 [2024-04-24 19:40:16.188223] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:50.632 [2024-04-24 19:40:16.188233] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.562 ms 00:25:50.632 [2024-04-24 19:40:16.188239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.632 [2024-04-24 19:40:16.217388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.632 [2024-04-24 19:40:16.217421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:50.632 [2024-04-24 19:40:16.217432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.169 ms 00:25:50.632 [2024-04-24 19:40:16.217439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.632 [2024-04-24 19:40:16.235225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.632 [2024-04-24 19:40:16.235265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:50.632 [2024-04-24 19:40:16.235276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.765 ms 00:25:50.632 [2024-04-24 19:40:16.235292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.632 [2024-04-24 19:40:16.253874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.632 [2024-04-24 19:40:16.253904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:50.632 [2024-04-24 19:40:16.253924] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.568 ms 00:25:50.632 [2024-04-24 19:40:16.253930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.632 [2024-04-24 19:40:16.254363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.632 [2024-04-24 19:40:16.254373] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:50.632 [2024-04-24 19:40:16.254381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:25:50.632 [2024-04-24 19:40:16.254388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.892 [2024-04-24 19:40:16.339411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.892 [2024-04-24 19:40:16.339469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:50.892 [2024-04-24 19:40:16.339482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.171 ms 00:25:50.892 [2024-04-24 19:40:16.339506] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.892 [2024-04-24 19:40:16.351145] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:50.892 [2024-04-24 19:40:16.354128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.892 [2024-04-24 19:40:16.354153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:50.892 [2024-04-24 19:40:16.354164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.588 ms 00:25:50.892 [2024-04-24 19:40:16.354171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.892 [2024-04-24 19:40:16.354273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.892 [2024-04-24 19:40:16.354286] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:50.892 [2024-04-24 19:40:16.354295] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:50.892 [2024-04-24 19:40:16.354301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.892 [2024-04-24 19:40:16.355105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.892 [2024-04-24 19:40:16.355126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:50.892 [2024-04-24 19:40:16.355135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:25:50.892 [2024-04-24 19:40:16.355141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.892 [2024-04-24 19:40:16.356701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.892 [2024-04-24 19:40:16.356726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:50.892 [2024-04-24 19:40:16.356736] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.544 ms 00:25:50.892 [2024-04-24 19:40:16.356743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.892 [2024-04-24 19:40:16.356771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.892 [2024-04-24 19:40:16.356778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:50.892 [2024-04-24 19:40:16.356786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:50.892 [2024-04-24 19:40:16.356792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.892 [2024-04-24 19:40:16.356822] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: 
*NOTICE*: [FTL][ftl0] Self test skipped 00:25:50.892 [2024-04-24 19:40:16.356831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.892 [2024-04-24 19:40:16.356837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:50.892 [2024-04-24 19:40:16.356844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:50.892 [2024-04-24 19:40:16.356853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.892 [2024-04-24 19:40:16.392602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.892 [2024-04-24 19:40:16.392641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:50.892 [2024-04-24 19:40:16.392652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.800 ms 00:25:50.892 [2024-04-24 19:40:16.392659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.892 [2024-04-24 19:40:16.392750] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.892 [2024-04-24 19:40:16.392763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:50.892 [2024-04-24 19:40:16.392770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:50.892 [2024-04-24 19:40:16.392777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.892 [2024-04-24 19:40:16.393949] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 368.584 ms, result 0 00:26:20.572  Copying: 34/1024 [MB] (34 MBps) Copying: 69/1024 [MB] (35 MBps) Copying: 104/1024 [MB] (34 MBps) Copying: 139/1024 [MB] (34 MBps) Copying: 173/1024 [MB] (34 MBps) Copying: 207/1024 [MB] (34 MBps) Copying: 242/1024 [MB] (35 MBps) Copying: 277/1024 [MB] (34 MBps) Copying: 311/1024 [MB] (34 MBps) Copying: 346/1024 [MB] (34 MBps) Copying: 380/1024 [MB] (34 MBps) Copying: 413/1024 [MB] (33 MBps) Copying: 448/1024 [MB] (34 MBps) Copying: 483/1024 [MB] (35 MBps) Copying: 517/1024 [MB] (34 MBps) Copying: 552/1024 [MB] (34 MBps) Copying: 587/1024 [MB] (35 MBps) Copying: 622/1024 [MB] (35 MBps) Copying: 657/1024 [MB] (35 MBps) Copying: 691/1024 [MB] (34 MBps) Copying: 725/1024 [MB] (34 MBps) Copying: 760/1024 [MB] (35 MBps) Copying: 795/1024 [MB] (34 MBps) Copying: 829/1024 [MB] (34 MBps) Copying: 864/1024 [MB] (34 MBps) Copying: 896/1024 [MB] (32 MBps) Copying: 931/1024 [MB] (35 MBps) Copying: 967/1024 [MB] (35 MBps) Copying: 1001/1024 [MB] (34 MBps) Copying: 1024/1024 [MB] (average 34 MBps)[2024-04-24 19:40:46.235292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.572 [2024-04-24 19:40:46.235426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:20.572 [2024-04-24 19:40:46.235466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:20.572 [2024-04-24 19:40:46.235493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.572 [2024-04-24 19:40:46.235556] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:20.572 [2024-04-24 19:40:46.245539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.572 [2024-04-24 19:40:46.245603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:20.572 [2024-04-24 19:40:46.245626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.957 ms 00:26:20.572 [2024-04-24 
19:40:46.245663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.572 [2024-04-24 19:40:46.246141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.572 [2024-04-24 19:40:46.246177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:20.572 [2024-04-24 19:40:46.246196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:26:20.572 [2024-04-24 19:40:46.246211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.831 [2024-04-24 19:40:46.250974] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.831 [2024-04-24 19:40:46.251002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:20.831 [2024-04-24 19:40:46.251015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.744 ms 00:26:20.831 [2024-04-24 19:40:46.251026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.831 [2024-04-24 19:40:46.258459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.831 [2024-04-24 19:40:46.258490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:20.831 [2024-04-24 19:40:46.258506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.426 ms 00:26:20.831 [2024-04-24 19:40:46.258513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.831 [2024-04-24 19:40:46.297021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.831 [2024-04-24 19:40:46.297058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:20.831 [2024-04-24 19:40:46.297069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.529 ms 00:26:20.831 [2024-04-24 19:40:46.297076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.831 [2024-04-24 19:40:46.318083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.831 [2024-04-24 19:40:46.318119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:20.831 [2024-04-24 19:40:46.318131] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.023 ms 00:26:20.831 [2024-04-24 19:40:46.318138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.831 [2024-04-24 19:40:46.321517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.831 [2024-04-24 19:40:46.321556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:20.831 [2024-04-24 19:40:46.321566] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.345 ms 00:26:20.831 [2024-04-24 19:40:46.321573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.831 [2024-04-24 19:40:46.359830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.831 [2024-04-24 19:40:46.359874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:20.831 [2024-04-24 19:40:46.359887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.310 ms 00:26:20.831 [2024-04-24 19:40:46.359895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.831 [2024-04-24 19:40:46.399828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.831 [2024-04-24 19:40:46.399887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:20.831 [2024-04-24 19:40:46.399900] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 39.977 ms 00:26:20.831 [2024-04-24 19:40:46.399907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.831 [2024-04-24 19:40:46.434789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.831 [2024-04-24 19:40:46.434821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:20.831 [2024-04-24 19:40:46.434832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.907 ms 00:26:20.831 [2024-04-24 19:40:46.434838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.831 [2024-04-24 19:40:46.469879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.831 [2024-04-24 19:40:46.469909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:20.831 [2024-04-24 19:40:46.469919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.033 ms 00:26:20.831 [2024-04-24 19:40:46.469925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.831 [2024-04-24 19:40:46.469961] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:20.831 [2024-04-24 19:40:46.469974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:20.831 [2024-04-24 19:40:46.469983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:26:20.831 [2024-04-24 19:40:46.469991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.469998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470266] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 
19:40:46.470438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:20.832 [2024-04-24 19:40:46.470604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:26:20.832 [2024-04-24 19:40:46.470611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:20.833 [2024-04-24 19:40:46.470617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:20.833 [2024-04-24 19:40:46.470624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:20.833 [2024-04-24 19:40:46.470631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:20.833 [2024-04-24 19:40:46.470637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:20.833 [2024-04-24 19:40:46.470644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:20.833 [2024-04-24 19:40:46.470663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:20.833 [2024-04-24 19:40:46.470670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:20.833 [2024-04-24 19:40:46.470677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:20.833 [2024-04-24 19:40:46.470691] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:20.833 [2024-04-24 19:40:46.470699] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9e036e91-3551-4163-9401-75fe915f2698 00:26:20.833 [2024-04-24 19:40:46.470706] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:26:20.833 [2024-04-24 19:40:46.470713] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:20.833 [2024-04-24 19:40:46.470724] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:20.833 [2024-04-24 19:40:46.470743] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:20.833 [2024-04-24 19:40:46.470750] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:20.833 [2024-04-24 19:40:46.470757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:20.833 [2024-04-24 19:40:46.470763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:20.833 [2024-04-24 19:40:46.470769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:20.833 [2024-04-24 19:40:46.470775] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:20.833 [2024-04-24 19:40:46.470782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.833 [2024-04-24 19:40:46.470788] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:20.833 [2024-04-24 19:40:46.470796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.823 ms 00:26:20.833 [2024-04-24 19:40:46.470802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.833 [2024-04-24 19:40:46.489941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:20.833 [2024-04-24 19:40:46.489974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:20.833 [2024-04-24 19:40:46.489983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.132 ms 00:26:20.833 [2024-04-24 19:40:46.489990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:20.833 [2024-04-24 19:40:46.490224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:20.833 [2024-04-24 19:40:46.490231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:20.833 [2024-04-24 19:40:46.490239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:26:20.833 [2024-04-24 19:40:46.490245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.092 [2024-04-24 19:40:46.542231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.092 [2024-04-24 19:40:46.542291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:21.092 [2024-04-24 19:40:46.542304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.092 [2024-04-24 19:40:46.542312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.092 [2024-04-24 19:40:46.542381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.092 [2024-04-24 19:40:46.542388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:21.092 [2024-04-24 19:40:46.542395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.092 [2024-04-24 19:40:46.542402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.092 [2024-04-24 19:40:46.542478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.092 [2024-04-24 19:40:46.542506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:21.092 [2024-04-24 19:40:46.542513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.092 [2024-04-24 19:40:46.542520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.092 [2024-04-24 19:40:46.542537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.092 [2024-04-24 19:40:46.542544] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:21.092 [2024-04-24 19:40:46.542551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.092 [2024-04-24 19:40:46.542558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.092 [2024-04-24 19:40:46.653457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.092 [2024-04-24 19:40:46.653505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:21.092 [2024-04-24 19:40:46.653517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.092 [2024-04-24 19:40:46.653540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.092 [2024-04-24 19:40:46.698144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.092 [2024-04-24 19:40:46.698189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:21.092 [2024-04-24 19:40:46.698199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.092 [2024-04-24 19:40:46.698206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.092 [2024-04-24 19:40:46.698277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.092 [2024-04-24 19:40:46.698291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:21.092 [2024-04-24 19:40:46.698298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.092 [2024-04-24 19:40:46.698304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.092 
[2024-04-24 19:40:46.698332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.092 [2024-04-24 19:40:46.698339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:21.092 [2024-04-24 19:40:46.698346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.092 [2024-04-24 19:40:46.698353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.092 [2024-04-24 19:40:46.698452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.092 [2024-04-24 19:40:46.698462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:21.093 [2024-04-24 19:40:46.698472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.093 [2024-04-24 19:40:46.698480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.093 [2024-04-24 19:40:46.698509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.093 [2024-04-24 19:40:46.698518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:21.093 [2024-04-24 19:40:46.698526] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.093 [2024-04-24 19:40:46.698532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.093 [2024-04-24 19:40:46.698567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.093 [2024-04-24 19:40:46.698574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:21.093 [2024-04-24 19:40:46.698584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.093 [2024-04-24 19:40:46.698591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.093 [2024-04-24 19:40:46.698632] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:21.093 [2024-04-24 19:40:46.698640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:21.093 [2024-04-24 19:40:46.698664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:21.093 [2024-04-24 19:40:46.698671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:21.093 [2024-04-24 19:40:46.698798] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 464.435 ms, result 0 00:26:22.471 00:26:22.471 00:26:22.471 19:40:47 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:24.376 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:24.376 19:40:49 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:24.376 19:40:49 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:24.376 19:40:49 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:24.376 19:40:49 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:24.376 19:40:49 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:24.376 19:40:49 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:24.376 19:40:49 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:24.376 19:40:49 -- ftl/dirty_shutdown.sh@37 -- # killprocess 81826 00:26:24.376 Process with pid 81826 is not found 00:26:24.376 19:40:49 -- common/autotest_common.sh@936 -- # '[' -z 81826 ']' 00:26:24.376 
19:40:49 -- common/autotest_common.sh@940 -- # kill -0 81826 00:26:24.376 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (81826) - No such process 00:26:24.376 19:40:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 81826 is not found' 00:26:24.376 19:40:49 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:24.635 Remove shared memory files 00:26:24.636 19:40:50 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:24.636 19:40:50 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:24.636 19:40:50 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:24.636 19:40:50 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:24.636 19:40:50 -- ftl/common.sh@207 -- # rm -f rm -f 00:26:24.636 19:40:50 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:24.636 19:40:50 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:24.636 ************************************ 00:26:24.636 END TEST ftl_dirty_shutdown 00:26:24.636 ************************************ 00:26:24.636 00:26:24.636 real 3m3.710s 00:26:24.636 user 3m31.509s 00:26:24.636 sys 0m27.987s 00:26:24.636 19:40:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:24.636 19:40:50 -- common/autotest_common.sh@10 -- # set +x 00:26:24.636 19:40:50 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:24.636 19:40:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:24.636 19:40:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:24.636 19:40:50 -- common/autotest_common.sh@10 -- # set +x 00:26:24.636 ************************************ 00:26:24.636 START TEST ftl_upgrade_shutdown 00:26:24.636 ************************************ 00:26:24.636 19:40:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:26:24.895 * Looking for test storage... 00:26:24.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:24.895 19:40:50 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:24.895 19:40:50 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:24.895 19:40:50 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:24.895 19:40:50 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:26:24.895 19:40:50 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:24.895 19:40:50 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:24.895 19:40:50 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:24.895 19:40:50 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:24.895 19:40:50 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:24.895 19:40:50 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:24.895 19:40:50 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:24.895 19:40:50 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:24.895 19:40:50 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:24.895 19:40:50 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:24.895 19:40:50 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:24.895 19:40:50 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:24.895 19:40:50 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:24.895 19:40:50 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:24.895 19:40:50 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:24.895 19:40:50 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:24.895 19:40:50 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:24.895 19:40:50 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:24.895 19:40:50 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:24.895 19:40:50 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:24.895 19:40:50 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:24.895 19:40:50 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:24.895 19:40:50 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:24.895 19:40:50 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:24.895 19:40:50 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:24.895 19:40:50 -- ftl/common.sh@81 -- # local base_bdev= 00:26:24.895 19:40:50 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:24.895 19:40:50 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:24.895 19:40:50 -- ftl/common.sh@89 -- # spdk_tgt_pid=83814 00:26:24.895 19:40:50 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:24.895 19:40:50 -- ftl/common.sh@91 -- # waitforlisten 83814 00:26:24.895 19:40:50 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:24.895 19:40:50 -- common/autotest_common.sh@817 -- # '[' -z 83814 ']' 00:26:24.895 19:40:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.895 19:40:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:24.895 19:40:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:24.895 19:40:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:24.895 19:40:50 -- common/autotest_common.sh@10 -- # set +x 00:26:24.895 [2024-04-24 19:40:50.494822] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:26:24.895 [2024-04-24 19:40:50.495007] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83814 ] 00:26:25.154 [2024-04-24 19:40:50.646871] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.412 [2024-04-24 19:40:50.885593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:26.368 19:40:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:26.368 19:40:51 -- common/autotest_common.sh@850 -- # return 0 00:26:26.368 19:40:51 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:26.368 19:40:51 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:26.368 19:40:51 -- ftl/common.sh@99 -- # local params 00:26:26.368 19:40:51 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:26.368 19:40:51 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:26.368 19:40:51 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:26.368 19:40:51 -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:26.368 19:40:51 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:26.368 19:40:51 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:26.368 19:40:51 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:26.368 19:40:51 -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:26.368 19:40:51 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:26.368 19:40:51 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:26.368 19:40:51 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:26.368 19:40:51 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:26.368 19:40:51 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:26.368 19:40:51 -- ftl/common.sh@54 -- # local name=base 00:26:26.368 19:40:51 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:26.368 19:40:51 -- ftl/common.sh@56 -- # local size=20480 00:26:26.368 19:40:51 -- ftl/common.sh@59 -- # local base_bdev 00:26:26.368 19:40:51 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:26.626 19:40:52 -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:26.626 19:40:52 -- ftl/common.sh@62 -- # local 
base_size 00:26:26.626 19:40:52 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:26.626 19:40:52 -- common/autotest_common.sh@1364 -- # local bdev_name=basen1 00:26:26.626 19:40:52 -- common/autotest_common.sh@1365 -- # local bdev_info 00:26:26.626 19:40:52 -- common/autotest_common.sh@1366 -- # local bs 00:26:26.626 19:40:52 -- common/autotest_common.sh@1367 -- # local nb 00:26:26.626 19:40:52 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:26.885 19:40:52 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:26:26.885 { 00:26:26.885 "name": "basen1", 00:26:26.885 "aliases": [ 00:26:26.885 "116e4c53-2d8c-4f84-923b-0424c0d995d9" 00:26:26.885 ], 00:26:26.885 "product_name": "NVMe disk", 00:26:26.885 "block_size": 4096, 00:26:26.885 "num_blocks": 1310720, 00:26:26.886 "uuid": "116e4c53-2d8c-4f84-923b-0424c0d995d9", 00:26:26.886 "assigned_rate_limits": { 00:26:26.886 "rw_ios_per_sec": 0, 00:26:26.886 "rw_mbytes_per_sec": 0, 00:26:26.886 "r_mbytes_per_sec": 0, 00:26:26.886 "w_mbytes_per_sec": 0 00:26:26.886 }, 00:26:26.886 "claimed": true, 00:26:26.886 "claim_type": "read_many_write_one", 00:26:26.886 "zoned": false, 00:26:26.886 "supported_io_types": { 00:26:26.886 "read": true, 00:26:26.886 "write": true, 00:26:26.886 "unmap": true, 00:26:26.886 "write_zeroes": true, 00:26:26.886 "flush": true, 00:26:26.886 "reset": true, 00:26:26.886 "compare": true, 00:26:26.886 "compare_and_write": false, 00:26:26.886 "abort": true, 00:26:26.886 "nvme_admin": true, 00:26:26.886 "nvme_io": true 00:26:26.886 }, 00:26:26.886 "driver_specific": { 00:26:26.886 "nvme": [ 00:26:26.886 { 00:26:26.886 "pci_address": "0000:00:11.0", 00:26:26.886 "trid": { 00:26:26.886 "trtype": "PCIe", 00:26:26.886 "traddr": "0000:00:11.0" 00:26:26.886 }, 00:26:26.886 "ctrlr_data": { 00:26:26.886 "cntlid": 0, 00:26:26.886 "vendor_id": "0x1b36", 00:26:26.886 "model_number": "QEMU NVMe Ctrl", 00:26:26.886 "serial_number": "12341", 00:26:26.886 "firmware_revision": "8.0.0", 00:26:26.886 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:26.886 "oacs": { 00:26:26.886 "security": 0, 00:26:26.886 "format": 1, 00:26:26.886 "firmware": 0, 00:26:26.886 "ns_manage": 1 00:26:26.886 }, 00:26:26.886 "multi_ctrlr": false, 00:26:26.886 "ana_reporting": false 00:26:26.886 }, 00:26:26.886 "vs": { 00:26:26.886 "nvme_version": "1.4" 00:26:26.886 }, 00:26:26.886 "ns_data": { 00:26:26.886 "id": 1, 00:26:26.886 "can_share": false 00:26:26.886 } 00:26:26.886 } 00:26:26.886 ], 00:26:26.886 "mp_policy": "active_passive" 00:26:26.886 } 00:26:26.886 } 00:26:26.886 ]' 00:26:26.886 19:40:52 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:26:26.886 19:40:52 -- common/autotest_common.sh@1369 -- # bs=4096 00:26:26.886 19:40:52 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:26:26.886 19:40:52 -- common/autotest_common.sh@1370 -- # nb=1310720 00:26:26.886 19:40:52 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:26:26.886 19:40:52 -- common/autotest_common.sh@1374 -- # echo 5120 00:26:26.886 19:40:52 -- ftl/common.sh@63 -- # base_size=5120 00:26:26.886 19:40:52 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:26.886 19:40:52 -- ftl/common.sh@67 -- # clear_lvols 00:26:26.886 19:40:52 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:26.886 19:40:52 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:27.145 19:40:52 -- ftl/common.sh@28 -- # stores=b7773cbf-4559-4cde-a702-b8e5ae74aadc 00:26:27.145 19:40:52 -- 
ftl/common.sh@29 -- # for lvs in $stores 00:26:27.145 19:40:52 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b7773cbf-4559-4cde-a702-b8e5ae74aadc 00:26:27.145 19:40:52 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:27.404 19:40:52 -- ftl/common.sh@68 -- # lvs=c7acd7aa-9fab-4874-a389-35ab3808e732 00:26:27.404 19:40:52 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u c7acd7aa-9fab-4874-a389-35ab3808e732 00:26:27.662 19:40:53 -- ftl/common.sh@107 -- # base_bdev=32630294-ffe5-41f3-b206-1596a303e153 00:26:27.662 19:40:53 -- ftl/common.sh@108 -- # [[ -z 32630294-ffe5-41f3-b206-1596a303e153 ]] 00:26:27.663 19:40:53 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 32630294-ffe5-41f3-b206-1596a303e153 5120 00:26:27.663 19:40:53 -- ftl/common.sh@35 -- # local name=cache 00:26:27.663 19:40:53 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:27.663 19:40:53 -- ftl/common.sh@37 -- # local base_bdev=32630294-ffe5-41f3-b206-1596a303e153 00:26:27.663 19:40:53 -- ftl/common.sh@38 -- # local cache_size=5120 00:26:27.663 19:40:53 -- ftl/common.sh@41 -- # get_bdev_size 32630294-ffe5-41f3-b206-1596a303e153 00:26:27.663 19:40:53 -- common/autotest_common.sh@1364 -- # local bdev_name=32630294-ffe5-41f3-b206-1596a303e153 00:26:27.663 19:40:53 -- common/autotest_common.sh@1365 -- # local bdev_info 00:26:27.663 19:40:53 -- common/autotest_common.sh@1366 -- # local bs 00:26:27.663 19:40:53 -- common/autotest_common.sh@1367 -- # local nb 00:26:27.663 19:40:53 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 32630294-ffe5-41f3-b206-1596a303e153 00:26:27.921 19:40:53 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:26:27.921 { 00:26:27.921 "name": "32630294-ffe5-41f3-b206-1596a303e153", 00:26:27.921 "aliases": [ 00:26:27.921 "lvs/basen1p0" 00:26:27.921 ], 00:26:27.921 "product_name": "Logical Volume", 00:26:27.921 "block_size": 4096, 00:26:27.921 "num_blocks": 5242880, 00:26:27.921 "uuid": "32630294-ffe5-41f3-b206-1596a303e153", 00:26:27.921 "assigned_rate_limits": { 00:26:27.921 "rw_ios_per_sec": 0, 00:26:27.921 "rw_mbytes_per_sec": 0, 00:26:27.921 "r_mbytes_per_sec": 0, 00:26:27.921 "w_mbytes_per_sec": 0 00:26:27.921 }, 00:26:27.921 "claimed": false, 00:26:27.921 "zoned": false, 00:26:27.921 "supported_io_types": { 00:26:27.921 "read": true, 00:26:27.921 "write": true, 00:26:27.921 "unmap": true, 00:26:27.921 "write_zeroes": true, 00:26:27.921 "flush": false, 00:26:27.921 "reset": true, 00:26:27.921 "compare": false, 00:26:27.921 "compare_and_write": false, 00:26:27.921 "abort": false, 00:26:27.921 "nvme_admin": false, 00:26:27.921 "nvme_io": false 00:26:27.921 }, 00:26:27.921 "driver_specific": { 00:26:27.921 "lvol": { 00:26:27.921 "lvol_store_uuid": "c7acd7aa-9fab-4874-a389-35ab3808e732", 00:26:27.921 "base_bdev": "basen1", 00:26:27.921 "thin_provision": true, 00:26:27.921 "snapshot": false, 00:26:27.921 "clone": false, 00:26:27.921 "esnap_clone": false 00:26:27.921 } 00:26:27.921 } 00:26:27.921 } 00:26:27.921 ]' 00:26:27.921 19:40:53 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:26:27.921 19:40:53 -- common/autotest_common.sh@1369 -- # bs=4096 00:26:27.921 19:40:53 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:26:27.921 19:40:53 -- common/autotest_common.sh@1370 -- # nb=5242880 00:26:27.921 19:40:53 -- 
common/autotest_common.sh@1373 -- # bdev_size=20480 00:26:27.921 19:40:53 -- common/autotest_common.sh@1374 -- # echo 20480 00:26:27.921 19:40:53 -- ftl/common.sh@41 -- # local base_size=1024 00:26:27.921 19:40:53 -- ftl/common.sh@44 -- # local nvc_bdev 00:26:27.921 19:40:53 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:28.195 19:40:53 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:28.195 19:40:53 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:28.195 19:40:53 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:28.454 19:40:53 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:28.454 19:40:53 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:28.454 19:40:53 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 32630294-ffe5-41f3-b206-1596a303e153 -c cachen1p0 --l2p_dram_limit 2 00:26:28.454 [2024-04-24 19:40:54.040755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.454 [2024-04-24 19:40:54.040817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:28.454 [2024-04-24 19:40:54.040837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:28.454 [2024-04-24 19:40:54.040845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.454 [2024-04-24 19:40:54.040907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.454 [2024-04-24 19:40:54.040916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:28.454 [2024-04-24 19:40:54.040927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:26:28.454 [2024-04-24 19:40:54.040934] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.454 [2024-04-24 19:40:54.040961] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:28.454 [2024-04-24 19:40:54.042154] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:28.454 [2024-04-24 19:40:54.042182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.454 [2024-04-24 19:40:54.042191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:28.454 [2024-04-24 19:40:54.042206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.236 ms 00:26:28.454 [2024-04-24 19:40:54.042216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.454 [2024-04-24 19:40:54.042249] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID a6187774-285d-4b0c-bb0d-fc5b9b4e1d3e 00:26:28.454 [2024-04-24 19:40:54.043687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.454 [2024-04-24 19:40:54.043713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:28.454 [2024-04-24 19:40:54.043722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:28.454 [2024-04-24 19:40:54.043731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.454 [2024-04-24 19:40:54.050989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.454 [2024-04-24 19:40:54.051022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:28.454 [2024-04-24 19:40:54.051031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl] duration: 7.186 ms 00:26:28.454 [2024-04-24 19:40:54.051055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.454 [2024-04-24 19:40:54.051099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.454 [2024-04-24 19:40:54.051112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:28.454 [2024-04-24 19:40:54.051120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:28.454 [2024-04-24 19:40:54.051128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.454 [2024-04-24 19:40:54.051191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.454 [2024-04-24 19:40:54.051205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:28.454 [2024-04-24 19:40:54.051212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:28.454 [2024-04-24 19:40:54.051221] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.454 [2024-04-24 19:40:54.051244] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:28.455 [2024-04-24 19:40:54.057172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.455 [2024-04-24 19:40:54.057228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:28.455 [2024-04-24 19:40:54.057272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.946 ms 00:26:28.455 [2024-04-24 19:40:54.057291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.455 [2024-04-24 19:40:54.057334] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.455 [2024-04-24 19:40:54.057354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:28.455 [2024-04-24 19:40:54.057388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:28.455 [2024-04-24 19:40:54.057408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.455 [2024-04-24 19:40:54.057508] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:28.455 [2024-04-24 19:40:54.057643] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:28.455 [2024-04-24 19:40:54.057689] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:28.455 [2024-04-24 19:40:54.057737] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:28.455 [2024-04-24 19:40:54.057807] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:28.455 [2024-04-24 19:40:54.057849] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:28.455 [2024-04-24 19:40:54.057898] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:28.455 [2024-04-24 19:40:54.057924] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:28.455 [2024-04-24 19:40:54.057954] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:28.455 [2024-04-24 19:40:54.057980] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:28.455 [2024-04-24 19:40:54.058013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.455 [2024-04-24 
19:40:54.058037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:28.455 [2024-04-24 19:40:54.058064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.506 ms 00:26:28.455 [2024-04-24 19:40:54.058089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.455 [2024-04-24 19:40:54.058185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.455 [2024-04-24 19:40:54.058214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:28.455 [2024-04-24 19:40:54.058246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:26:28.455 [2024-04-24 19:40:54.058271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.455 [2024-04-24 19:40:54.058356] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:28.455 [2024-04-24 19:40:54.058385] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:28.455 [2024-04-24 19:40:54.058416] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:28.455 [2024-04-24 19:40:54.058443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:28.455 [2024-04-24 19:40:54.058471] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:28.455 [2024-04-24 19:40:54.058480] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:28.455 [2024-04-24 19:40:54.058488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:28.455 [2024-04-24 19:40:54.058494] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:28.455 [2024-04-24 19:40:54.058502] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:28.455 [2024-04-24 19:40:54.058508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:28.455 [2024-04-24 19:40:54.058516] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:28.455 [2024-04-24 19:40:54.058522] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:28.455 [2024-04-24 19:40:54.058530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:28.455 [2024-04-24 19:40:54.058536] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:28.455 [2024-04-24 19:40:54.058545] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:28.455 [2024-04-24 19:40:54.058551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:28.455 [2024-04-24 19:40:54.058558] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:28.455 [2024-04-24 19:40:54.058565] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:28.455 [2024-04-24 19:40:54.058575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:28.455 [2024-04-24 19:40:54.058581] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:28.455 [2024-04-24 19:40:54.058589] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:28.455 [2024-04-24 19:40:54.058596] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:28.455 [2024-04-24 19:40:54.058604] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:28.455 [2024-04-24 19:40:54.058610] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:28.455 [2024-04-24 19:40:54.058618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:28.455 [2024-04-24 
19:40:54.058624] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:28.455 [2024-04-24 19:40:54.058643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:28.455 [2024-04-24 19:40:54.058651] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:28.455 [2024-04-24 19:40:54.058658] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:28.455 [2024-04-24 19:40:54.058665] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:28.455 [2024-04-24 19:40:54.058672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:28.455 [2024-04-24 19:40:54.058678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:28.455 [2024-04-24 19:40:54.058686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:28.455 [2024-04-24 19:40:54.058692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:28.455 [2024-04-24 19:40:54.058702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:28.455 [2024-04-24 19:40:54.058708] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:28.455 [2024-04-24 19:40:54.058716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:28.455 [2024-04-24 19:40:54.058722] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:28.455 [2024-04-24 19:40:54.058730] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:28.455 [2024-04-24 19:40:54.058736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:28.455 [2024-04-24 19:40:54.058744] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:28.455 [2024-04-24 19:40:54.058751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:28.455 [2024-04-24 19:40:54.058759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:28.455 [2024-04-24 19:40:54.058769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:28.455 [2024-04-24 19:40:54.058778] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:28.455 [2024-04-24 19:40:54.058784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:28.455 [2024-04-24 19:40:54.058792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:28.455 [2024-04-24 19:40:54.058798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:28.455 [2024-04-24 19:40:54.058806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:28.455 [2024-04-24 19:40:54.058813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:28.455 [2024-04-24 19:40:54.058824] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:28.455 [2024-04-24 19:40:54.058833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:28.455 [2024-04-24 19:40:54.058843] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:28.455 [2024-04-24 19:40:54.058850] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:28.455 [2024-04-24 19:40:54.058858] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 
blk_offs:0xec0 blk_sz:0x20 00:26:28.455 [2024-04-24 19:40:54.058865] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:28.455 [2024-04-24 19:40:54.058874] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:28.455 [2024-04-24 19:40:54.058881] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:28.455 [2024-04-24 19:40:54.058889] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:28.455 [2024-04-24 19:40:54.058896] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:28.455 [2024-04-24 19:40:54.058904] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:28.455 [2024-04-24 19:40:54.058911] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:28.455 [2024-04-24 19:40:54.058918] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:28.455 [2024-04-24 19:40:54.058925] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:28.455 [2024-04-24 19:40:54.058933] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:28.455 [2024-04-24 19:40:54.058940] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:28.455 [2024-04-24 19:40:54.058952] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:28.455 [2024-04-24 19:40:54.058959] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:28.455 [2024-04-24 19:40:54.058967] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:28.455 [2024-04-24 19:40:54.058975] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:28.455 [2024-04-24 19:40:54.058983] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:28.455 [2024-04-24 19:40:54.058991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.455 [2024-04-24 19:40:54.058999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:28.456 [2024-04-24 19:40:54.059006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.673 ms 00:26:28.456 [2024-04-24 19:40:54.059015] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.456 [2024-04-24 19:40:54.081736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.456 [2024-04-24 19:40:54.081812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:28.456 [2024-04-24 19:40:54.081842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl] duration: 22.724 ms 00:26:28.456 [2024-04-24 19:40:54.081863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.456 [2024-04-24 19:40:54.081916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.456 [2024-04-24 19:40:54.081938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:28.456 [2024-04-24 19:40:54.081957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:28.456 [2024-04-24 19:40:54.081977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.714 [2024-04-24 19:40:54.132166] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.714 [2024-04-24 19:40:54.132272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:28.714 [2024-04-24 19:40:54.132304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 50.226 ms 00:26:28.714 [2024-04-24 19:40:54.132328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.714 [2024-04-24 19:40:54.132380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.714 [2024-04-24 19:40:54.132403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:28.714 [2024-04-24 19:40:54.132426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:28.714 [2024-04-24 19:40:54.132468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.714 [2024-04-24 19:40:54.132951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.714 [2024-04-24 19:40:54.133000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:28.714 [2024-04-24 19:40:54.133030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.419 ms 00:26:28.714 [2024-04-24 19:40:54.133053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.714 [2024-04-24 19:40:54.133121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.714 [2024-04-24 19:40:54.133159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:28.714 [2024-04-24 19:40:54.133189] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:26:28.714 [2024-04-24 19:40:54.133222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.714 [2024-04-24 19:40:54.156237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.714 [2024-04-24 19:40:54.156365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:28.714 [2024-04-24 19:40:54.156418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.012 ms 00:26:28.714 [2024-04-24 19:40:54.156453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.714 [2024-04-24 19:40:54.169701] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:28.714 [2024-04-24 19:40:54.170759] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.714 [2024-04-24 19:40:54.170808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:28.714 [2024-04-24 19:40:54.170842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.164 ms 00:26:28.714 [2024-04-24 19:40:54.170862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.714 [2024-04-24 19:40:54.203322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:28.714 [2024-04-24 19:40:54.203441] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:28.714 [2024-04-24 19:40:54.203475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 32.443 ms 00:26:28.714 [2024-04-24 19:40:54.203495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:28.714 [2024-04-24 19:40:54.203537] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:26:28.714 [2024-04-24 19:40:54.203567] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:26:32.030 [2024-04-24 19:40:57.612283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.030 [2024-04-24 19:40:57.612422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:32.030 [2024-04-24 19:40:57.612459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3415.318 ms 00:26:32.030 [2024-04-24 19:40:57.612480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.030 [2024-04-24 19:40:57.612616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.030 [2024-04-24 19:40:57.612676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:32.030 [2024-04-24 19:40:57.612714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.073 ms 00:26:32.030 [2024-04-24 19:40:57.612735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.030 [2024-04-24 19:40:57.649360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.030 [2024-04-24 19:40:57.649443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:32.030 [2024-04-24 19:40:57.649474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 36.623 ms 00:26:32.030 [2024-04-24 19:40:57.649512] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.030 [2024-04-24 19:40:57.687649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.030 [2024-04-24 19:40:57.687758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:32.030 [2024-04-24 19:40:57.687777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 38.157 ms 00:26:32.030 [2024-04-24 19:40:57.687785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.030 [2024-04-24 19:40:57.688316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.030 [2024-04-24 19:40:57.688331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:32.030 [2024-04-24 19:40:57.688348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.486 ms 00:26:32.030 [2024-04-24 19:40:57.688356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.289 [2024-04-24 19:40:57.781531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.289 [2024-04-24 19:40:57.781588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:32.289 [2024-04-24 19:40:57.781604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 93.296 ms 00:26:32.289 [2024-04-24 19:40:57.781612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.289 [2024-04-24 19:40:57.819546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.289 [2024-04-24 19:40:57.819596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 
00:26:32.289 [2024-04-24 19:40:57.819611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 37.964 ms 00:26:32.289 [2024-04-24 19:40:57.819622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.289 [2024-04-24 19:40:57.821416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.289 [2024-04-24 19:40:57.821441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:32.289 [2024-04-24 19:40:57.821452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.758 ms 00:26:32.289 [2024-04-24 19:40:57.821467] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.289 [2024-04-24 19:40:57.858873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.289 [2024-04-24 19:40:57.858918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:32.289 [2024-04-24 19:40:57.858933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 37.423 ms 00:26:32.289 [2024-04-24 19:40:57.858940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.289 [2024-04-24 19:40:57.858973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.289 [2024-04-24 19:40:57.858981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:32.289 [2024-04-24 19:40:57.858994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:32.289 [2024-04-24 19:40:57.859001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.289 [2024-04-24 19:40:57.859086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:32.289 [2024-04-24 19:40:57.859095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:32.289 [2024-04-24 19:40:57.859104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:32.289 [2024-04-24 19:40:57.859111] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:32.289 [2024-04-24 19:40:57.860244] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3826.347 ms, result 0 00:26:32.289 { 00:26:32.289 "name": "ftl", 00:26:32.289 "uuid": "a6187774-285d-4b0c-bb0d-fc5b9b4e1d3e" 00:26:32.289 } 00:26:32.289 19:40:57 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:32.548 [2024-04-24 19:40:58.067035] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.548 19:40:58 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:32.807 19:40:58 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:26:32.807 [2024-04-24 19:40:58.430670] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:32.807 19:40:58 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:26:33.066 [2024-04-24 19:40:58.597045] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:33.066 19:40:58 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:26:33.326 Fill FTL, iteration 1 00:26:33.326 19:40:58 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:26:33.326 19:40:58 -- 
ftl/upgrade_shutdown.sh@29 -- # seek=0 00:26:33.326 19:40:58 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:26:33.326 19:40:58 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:26:33.326 19:40:58 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:26:33.326 19:40:58 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:26:33.326 19:40:58 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:26:33.326 19:40:58 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:26:33.326 19:40:58 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:26:33.326 19:40:58 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:33.326 19:40:58 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:26:33.326 19:40:58 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:33.326 19:40:58 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:33.326 19:40:58 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:33.326 19:40:58 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:33.326 19:40:58 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:26:33.326 19:40:58 -- ftl/common.sh@163 -- # spdk_ini_pid=83939 00:26:33.326 19:40:58 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:26:33.326 19:40:58 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:26:33.326 19:40:58 -- ftl/common.sh@165 -- # waitforlisten 83939 /var/tmp/spdk.tgt.sock 00:26:33.326 19:40:58 -- common/autotest_common.sh@817 -- # '[' -z 83939 ']' 00:26:33.326 19:40:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:26:33.326 19:40:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:33.326 19:40:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:26:33.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:26:33.326 19:40:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:33.326 19:40:58 -- common/autotest_common.sh@10 -- # set +x 00:26:33.584 [2024-04-24 19:40:59.022864] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
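For reference before the fill output below: the device under test was assembled and exported by the ftl/common.sh trace further up. Collecting those RPCs in one place (a recap of the trace, not new steps; rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py run against the main target):

  # base + NV cache: attach the PCIe controller, carve a 5120 MiB split as the write buffer
  rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0        # -> cachen1
  rpc.py bdev_split_create cachen1 -s 5120 1                                 # -> cachen1p0
  rpc.py -t 60 bdev_ftl_create -b ftl -d 32630294-ffe5-41f3-b206-1596a303e153 \
      -c cachen1p0 --l2p_dram_limit 2
  # export the FTL bdev over NVMe/TCP so a separate process can drive I/O to it
  rpc.py nvmf_create_transport --trtype TCP
  rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1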
00:26:33.584 [2024-04-24 19:40:59.023047] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83939 ] 00:26:33.584 [2024-04-24 19:40:59.185446] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.842 [2024-04-24 19:40:59.423843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:34.791 19:41:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:34.791 19:41:00 -- common/autotest_common.sh@850 -- # return 0 00:26:34.792 19:41:00 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:26:35.049 ftln1 00:26:35.049 19:41:00 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:26:35.049 19:41:00 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:26:35.307 19:41:00 -- ftl/common.sh@173 -- # echo ']}' 00:26:35.307 19:41:00 -- ftl/common.sh@176 -- # killprocess 83939 00:26:35.307 19:41:00 -- common/autotest_common.sh@936 -- # '[' -z 83939 ']' 00:26:35.307 19:41:00 -- common/autotest_common.sh@940 -- # kill -0 83939 00:26:35.307 19:41:00 -- common/autotest_common.sh@941 -- # uname 00:26:35.307 19:41:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:35.307 19:41:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83939 00:26:35.307 killing process with pid 83939 00:26:35.307 19:41:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:35.307 19:41:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:35.307 19:41:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83939' 00:26:35.307 19:41:00 -- common/autotest_common.sh@955 -- # kill 83939 00:26:35.307 19:41:00 -- common/autotest_common.sh@960 -- # wait 83939 00:26:37.842 19:41:03 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:26:37.842 19:41:03 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:26:37.842 [2024-04-24 19:41:03.342659] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
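The tcp_dd helper traced above follows a fixed pattern: start a throwaway spdk_tgt pinned to core 1 with its own RPC socket, attach the exported namespace over TCP (it appears as ftln1), snapshot the bdev config to ini.json, kill the helper, then let spdk_dd replay that config. A minimal sketch with paths taken from the trace (the wait-for-socket step is elided; SPDK is shorthand introduced here, not a variable the script uses):

  SPDK=/home/vagrant/spdk_repo/spdk
  $SPDK/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  $SPDK/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
      -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0      # -> ftln1
  { echo '{"subsystems": ['
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'; } > $SPDK/test/ftl/config/ini.json
  # (the script kills the helper target here, then replays the saved config)
  # fill: 1024 x 1 MiB blocks of random data at queue depth 2, starting at block 0
  $SPDK/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=$SPDK/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0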
00:26:37.842 [2024-04-24 19:41:03.342809] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83999 ] 00:26:37.842 [2024-04-24 19:41:03.503685] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.100 [2024-04-24 19:41:03.738912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.511  Copying: 242/1024 [MB] (242 MBps) Copying: 476/1024 [MB] (234 MBps) Copying: 703/1024 [MB] (227 MBps) Copying: 929/1024 [MB] (226 MBps) Copying: 1024/1024 [MB] (average 232 MBps) 00:26:44.511 00:26:44.511 Calculate MD5 checksum, iteration 1 00:26:44.511 19:41:10 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:26:44.511 19:41:10 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:26:44.511 19:41:10 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:44.511 19:41:10 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:44.511 19:41:10 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:44.511 19:41:10 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:44.511 19:41:10 -- ftl/common.sh@154 -- # return 0 00:26:44.512 19:41:10 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:44.770 [2024-04-24 19:41:10.232545] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
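Each fill is immediately read back through the same NVMe/TCP path and hashed; --skip mirrors the --seek used for the write, in the same 1 MiB blocks, and the digest is kept in the sums[] array for later comparison. The verify step just launched, sketched with the same shorthand as above:

  $SPDK/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=$SPDK/test/ftl/config/ini.json \
      --ib=ftln1 --of=$SPDK/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum $SPDK/test/ftl/file | cut -f1 '-d '     # recorded below as sums[0]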
00:26:44.770 [2024-04-24 19:41:10.232814] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84067 ] 00:26:44.770 [2024-04-24 19:41:10.399111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.028 [2024-04-24 19:41:10.676621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.569  Copying: 655/1024 [MB] (655 MBps) Copying: 1024/1024 [MB] (average 634 MBps) 00:26:48.569 00:26:48.569 19:41:13 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:26:48.569 19:41:13 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:50.506 19:41:15 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:50.506 19:41:15 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=038fd43b4d803b4adabd3177499e4c35 00:26:50.506 19:41:15 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:50.506 19:41:15 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:50.506 19:41:15 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:26:50.506 Fill FTL, iteration 2 00:26:50.506 19:41:15 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:50.506 19:41:15 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:50.506 19:41:15 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:50.506 19:41:15 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:50.506 19:41:15 -- ftl/common.sh@154 -- # return 0 00:26:50.506 19:41:15 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:26:50.506 [2024-04-24 19:41:15.816587] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
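The upgrade_shutdown.sh@28-@48 trace lines spell out the driver loop: two iterations, each writing and then hashing a fresh 1 GiB window, with seek and skip advanced by count blocks per pass. Reconstructed control flow (variable names are from the trace; tcp_dd stands in for the script's helper, so treat this as a sketch rather than the script itself):

  size=1073741824; bs=1048576; count=1024; qd=2; iterations=2
  seek=0; skip=0; sums=()
  for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of=$SPDK/test/ftl/file --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum $SPDK/test/ftl/file | cut -f1 -d' ')
  done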
00:26:50.506 [2024-04-24 19:41:15.816780] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84138 ] 00:26:50.506 [2024-04-24 19:41:15.983552] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.772 [2024-04-24 19:41:16.226476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:57.351  Copying: 225/1024 [MB] (225 MBps) Copying: 445/1024 [MB] (220 MBps) Copying: 650/1024 [MB] (205 MBps) Copying: 887/1024 [MB] (237 MBps) Copying: 1024/1024 [MB] (average 218 MBps) 00:26:57.351 00:26:57.351 Calculate MD5 checksum, iteration 2 00:26:57.351 19:41:22 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:26:57.351 19:41:22 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:26:57.351 19:41:22 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:57.351 19:41:22 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:57.351 19:41:22 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:57.351 19:41:22 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:57.351 19:41:22 -- ftl/common.sh@154 -- # return 0 00:26:57.351 19:41:22 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:57.351 [2024-04-24 19:41:22.805507] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
00:26:57.352 [2024-04-24 19:41:22.805622] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84209 ] 00:26:57.352 [2024-04-24 19:41:22.968132] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.610 [2024-04-24 19:41:23.215003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.065  Copying: 649/1024 [MB] (649 MBps) Copying: 1024/1024 [MB] (average 650 MBps) 00:27:05.065 00:27:05.066 19:41:30 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:05.066 19:41:30 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:06.443 19:41:32 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:06.443 19:41:32 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=601ff13618ad0917b6b4183be980343b 00:27:06.443 19:41:32 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:06.443 19:41:32 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:06.443 19:41:32 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:06.703 [2024-04-24 19:41:32.225845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.703 [2024-04-24 19:41:32.225912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:06.703 [2024-04-24 19:41:32.225927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:06.703 [2024-04-24 19:41:32.225950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.703 [2024-04-24 19:41:32.225979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.703 [2024-04-24 19:41:32.225990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:06.703 [2024-04-24 19:41:32.225997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:06.703 [2024-04-24 19:41:32.226004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.703 [2024-04-24 19:41:32.226033] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.703 [2024-04-24 19:41:32.226041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:06.703 [2024-04-24 19:41:32.226049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:06.703 [2024-04-24 19:41:32.226055] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.703 [2024-04-24 19:41:32.226124] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.269 ms, result 0 00:27:06.703 true 00:27:06.703 19:41:32 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:06.963 { 00:27:06.963 "name": "ftl", 00:27:06.963 "properties": [ 00:27:06.963 { 00:27:06.963 "name": "superblock_version", 00:27:06.963 "value": 5, 00:27:06.963 "read-only": true 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "name": "base_device", 00:27:06.963 "bands": [ 00:27:06.963 { 00:27:06.963 "id": 0, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 1, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 2, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 3, 
00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 4, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 5, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 6, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 7, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 8, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 9, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 10, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 11, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 12, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 13, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 14, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 15, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 16, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 17, 00:27:06.963 "state": "FREE", 00:27:06.963 "validity": 0.0 00:27:06.963 } 00:27:06.963 ], 00:27:06.963 "read-only": true 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "name": "cache_device", 00:27:06.963 "type": "bdev", 00:27:06.963 "chunks": [ 00:27:06.963 { 00:27:06.963 "id": 0, 00:27:06.963 "state": "CLOSED", 00:27:06.963 "utilization": 1.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 1, 00:27:06.963 "state": "CLOSED", 00:27:06.963 "utilization": 1.0 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 2, 00:27:06.963 "state": "OPEN", 00:27:06.963 "utilization": 0.001953125 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "id": 3, 00:27:06.963 "state": "OPEN", 00:27:06.963 "utilization": 0.0 00:27:06.963 } 00:27:06.963 ], 00:27:06.963 "read-only": true 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "name": "verbose_mode", 00:27:06.963 "value": true, 00:27:06.963 "unit": "", 00:27:06.963 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:06.963 }, 00:27:06.963 { 00:27:06.963 "name": "prep_upgrade_on_shutdown", 00:27:06.963 "value": false, 00:27:06.963 "unit": "", 00:27:06.963 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:06.963 } 00:27:06.963 ] 00:27:06.963 } 00:27:06.963 19:41:32 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:06.963 [2024-04-24 19:41:32.621376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.963 [2024-04-24 19:41:32.621515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:06.963 [2024-04-24 19:41:32.621549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:06.963 [2024-04-24 19:41:32.621569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.963 [2024-04-24 19:41:32.621610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:06.963 [2024-04-24 19:41:32.621630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:06.963 [2024-04-24 19:41:32.621659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:06.963 [2024-04-24 19:41:32.621676] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.963 [2024-04-24 19:41:32.621722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:06.963 [2024-04-24 19:41:32.621743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:06.963 [2024-04-24 19:41:32.621783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:06.963 [2024-04-24 19:41:32.621802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:06.963 [2024-04-24 19:41:32.621876] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.495 ms, result 0 00:27:06.963 true 00:27:07.229 19:41:32 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:07.229 19:41:32 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:07.229 19:41:32 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:07.229 19:41:32 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:07.229 19:41:32 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:07.229 19:41:32 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:07.501 [2024-04-24 19:41:33.012912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.501 [2024-04-24 19:41:33.013028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:07.501 [2024-04-24 19:41:33.013073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:07.501 [2024-04-24 19:41:33.013093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.501 [2024-04-24 19:41:33.013132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.501 [2024-04-24 19:41:33.013153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:07.501 [2024-04-24 19:41:33.013171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:07.501 [2024-04-24 19:41:33.013189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.501 [2024-04-24 19:41:33.013217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:07.501 [2024-04-24 19:41:33.013235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:07.501 [2024-04-24 19:41:33.013254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:07.501 [2024-04-24 19:41:33.013280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:07.501 [2024-04-24 19:41:33.013352] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.432 ms, result 0 00:27:07.501 true 00:27:07.501 19:41:33 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:07.760 { 00:27:07.761 "name": "ftl", 00:27:07.761 "properties": [ 00:27:07.761 { 00:27:07.761 "name": "superblock_version", 00:27:07.761 "value": 5, 00:27:07.761 "read-only": true 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 
"name": "base_device", 00:27:07.761 "bands": [ 00:27:07.761 { 00:27:07.761 "id": 0, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 1, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 2, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 3, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 4, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 5, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 6, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 7, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 8, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 9, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 10, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 11, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 12, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 13, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 14, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 15, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 16, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 17, 00:27:07.761 "state": "FREE", 00:27:07.761 "validity": 0.0 00:27:07.761 } 00:27:07.761 ], 00:27:07.761 "read-only": true 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "name": "cache_device", 00:27:07.761 "type": "bdev", 00:27:07.761 "chunks": [ 00:27:07.761 { 00:27:07.761 "id": 0, 00:27:07.761 "state": "CLOSED", 00:27:07.761 "utilization": 1.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 1, 00:27:07.761 "state": "CLOSED", 00:27:07.761 "utilization": 1.0 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 2, 00:27:07.761 "state": "OPEN", 00:27:07.761 "utilization": 0.001953125 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "id": 3, 00:27:07.761 "state": "OPEN", 00:27:07.761 "utilization": 0.0 00:27:07.761 } 00:27:07.761 ], 00:27:07.761 "read-only": true 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "name": "verbose_mode", 00:27:07.761 "value": true, 00:27:07.761 "unit": "", 00:27:07.761 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:07.761 }, 00:27:07.761 { 00:27:07.761 "name": "prep_upgrade_on_shutdown", 00:27:07.761 "value": true, 00:27:07.761 "unit": "", 00:27:07.761 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:07.761 } 00:27:07.761 ] 00:27:07.761 } 00:27:07.761 19:41:33 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:07.761 19:41:33 -- ftl/common.sh@130 -- # [[ -n 83814 ]] 00:27:07.761 19:41:33 -- ftl/common.sh@131 -- # killprocess 83814 00:27:07.761 19:41:33 -- common/autotest_common.sh@936 -- # '[' -z 83814 ']' 
00:27:07.761 19:41:33 -- common/autotest_common.sh@940 -- # kill -0 83814 00:27:07.761 19:41:33 -- common/autotest_common.sh@941 -- # uname 00:27:07.761 19:41:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:07.761 19:41:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83814 00:27:07.761 killing process with pid 83814 00:27:07.761 19:41:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:07.761 19:41:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:07.761 19:41:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83814' 00:27:07.761 19:41:33 -- common/autotest_common.sh@955 -- # kill 83814 00:27:07.761 19:41:33 -- common/autotest_common.sh@960 -- # wait 83814 00:27:08.701 [2024-04-24 19:41:34.368322] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:08.960 [2024-04-24 19:41:34.390041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.960 [2024-04-24 19:41:34.390089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:08.960 [2024-04-24 19:41:34.390103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:08.960 [2024-04-24 19:41:34.390127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:08.960 [2024-04-24 19:41:34.390147] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:08.960 [2024-04-24 19:41:34.393988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:08.960 [2024-04-24 19:41:34.394018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:08.960 [2024-04-24 19:41:34.394028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.835 ms 00:27:08.960 [2024-04-24 19:41:34.394035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.837309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.837378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:17.089 [2024-04-24 19:41:41.837393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7457.606 ms 00:27:17.089 [2024-04-24 19:41:41.837417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.838557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.838590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:17.089 [2024-04-24 19:41:41.838601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.120 ms 00:27:17.089 [2024-04-24 19:41:41.838609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.839698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.839721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:17.089 [2024-04-24 19:41:41.839731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.052 ms 00:27:17.089 [2024-04-24 19:41:41.839739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.856020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.856068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:17.089 [2024-04-24 
19:41:41.856079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.250 ms 00:27:17.089 [2024-04-24 19:41:41.856086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.866212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.866260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:17.089 [2024-04-24 19:41:41.866271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.110 ms 00:27:17.089 [2024-04-24 19:41:41.866278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.866359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.866369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:17.089 [2024-04-24 19:41:41.866378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:27:17.089 [2024-04-24 19:41:41.866385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.882056] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.882091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:17.089 [2024-04-24 19:41:41.882103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.680 ms 00:27:17.089 [2024-04-24 19:41:41.882110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.898595] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.898640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:17.089 [2024-04-24 19:41:41.898651] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.485 ms 00:27:17.089 [2024-04-24 19:41:41.898658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.913978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.914010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:17.089 [2024-04-24 19:41:41.914020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.317 ms 00:27:17.089 [2024-04-24 19:41:41.914027] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.929045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.929079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:17.089 [2024-04-24 19:41:41.929089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.984 ms 00:27:17.089 [2024-04-24 19:41:41.929095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.929125] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:17.089 [2024-04-24 19:41:41.929141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:17.089 [2024-04-24 19:41:41.929151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:17.089 [2024-04-24 19:41:41.929159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:17.089 [2024-04-24 19:41:41.929167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] 
Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:17.089 [2024-04-24 19:41:41.929296] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:17.089 [2024-04-24 19:41:41.929303] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a6187774-285d-4b0c-bb0d-fc5b9b4e1d3e 00:27:17.089 [2024-04-24 19:41:41.929312] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:17.089 [2024-04-24 19:41:41.929319] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:17.089 [2024-04-24 19:41:41.929326] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:17.089 [2024-04-24 19:41:41.929334] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:17.089 [2024-04-24 19:41:41.929341] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:17.089 [2024-04-24 19:41:41.929349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:17.089 [2024-04-24 19:41:41.929356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:17.089 [2024-04-24 19:41:41.929364] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:17.089 [2024-04-24 19:41:41.929371] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:17.089 [2024-04-24 19:41:41.929380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.929392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:17.089 [2024-04-24 19:41:41.929407] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:27:17.089 [2024-04-24 19:41:41.929415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.949681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.949719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:17.089 [2024-04-24 19:41:41.949729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.286 ms 00:27:17.089 [2024-04-24 19:41:41.949737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:41.950016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:17.089 [2024-04-24 19:41:41.950027] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:17.089 [2024-04-24 19:41:41.950034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.258 ms 00:27:17.089 [2024-04-24 19:41:41.950042] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:42.019972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.089 [2024-04-24 19:41:42.020032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:17.089 [2024-04-24 19:41:42.020045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.089 [2024-04-24 19:41:42.020053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:42.020105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.089 [2024-04-24 19:41:42.020113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:17.089 [2024-04-24 19:41:42.020120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.089 [2024-04-24 19:41:42.020128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:42.020214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.089 [2024-04-24 19:41:42.020225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:17.089 [2024-04-24 19:41:42.020234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.089 [2024-04-24 19:41:42.020241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:42.020260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.089 [2024-04-24 19:41:42.020272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:17.089 [2024-04-24 19:41:42.020279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.089 [2024-04-24 19:41:42.020286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.089 [2024-04-24 19:41:42.145878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.089 [2024-04-24 19:41:42.145945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:17.089 [2024-04-24 19:41:42.145958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.090 [2024-04-24 19:41:42.145981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.090 [2024-04-24 19:41:42.190446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.090 [2024-04-24 19:41:42.190500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 
00:27:17.090 [2024-04-24 19:41:42.190513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.090 [2024-04-24 19:41:42.190521] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.090 [2024-04-24 19:41:42.190602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.090 [2024-04-24 19:41:42.190610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:17.090 [2024-04-24 19:41:42.190617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.090 [2024-04-24 19:41:42.190624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.090 [2024-04-24 19:41:42.190671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.090 [2024-04-24 19:41:42.190699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:17.090 [2024-04-24 19:41:42.190712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.090 [2024-04-24 19:41:42.190719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.090 [2024-04-24 19:41:42.190815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.090 [2024-04-24 19:41:42.190826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:17.090 [2024-04-24 19:41:42.190833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.090 [2024-04-24 19:41:42.190841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.090 [2024-04-24 19:41:42.190873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.090 [2024-04-24 19:41:42.190889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:17.090 [2024-04-24 19:41:42.190896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.090 [2024-04-24 19:41:42.190907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.090 [2024-04-24 19:41:42.190944] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.090 [2024-04-24 19:41:42.190952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:17.090 [2024-04-24 19:41:42.190960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.090 [2024-04-24 19:41:42.190967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.090 [2024-04-24 19:41:42.191011] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:17.090 [2024-04-24 19:41:42.191020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:17.090 [2024-04-24 19:41:42.191030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:17.090 [2024-04-24 19:41:42.191038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:17.090 [2024-04-24 19:41:42.191150] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7816.143 ms, result 0 00:27:22.371 19:41:47 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:22.371 19:41:47 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:22.371 19:41:47 -- ftl/common.sh@81 -- # local base_bdev= 00:27:22.371 19:41:47 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:22.371 19:41:47 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:22.371 19:41:47 -- ftl/common.sh@89 -- # spdk_tgt_pid=84468 00:27:22.371 
19:41:47 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:22.371 19:41:47 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:22.371 19:41:47 -- ftl/common.sh@91 -- # waitforlisten 84468 00:27:22.371 19:41:47 -- common/autotest_common.sh@817 -- # '[' -z 84468 ']' 00:27:22.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.371 19:41:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.371 19:41:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:22.371 19:41:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.371 19:41:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:22.371 19:41:47 -- common/autotest_common.sh@10 -- # set +x 00:27:22.371 [2024-04-24 19:41:47.350325] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:27:22.371 [2024-04-24 19:41:47.350536] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84468 ] 00:27:22.371 [2024-04-24 19:41:47.518465] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.371 [2024-04-24 19:41:47.757737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.312 [2024-04-24 19:41:48.721403] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:23.312 [2024-04-24 19:41:48.721549] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:23.312 [2024-04-24 19:41:48.860265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.312 [2024-04-24 19:41:48.860420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:23.312 [2024-04-24 19:41:48.860475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:23.312 [2024-04-24 19:41:48.860514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.312 [2024-04-24 19:41:48.860591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.312 [2024-04-24 19:41:48.860619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:23.312 [2024-04-24 19:41:48.860713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:27:23.312 [2024-04-24 19:41:48.860761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.312 [2024-04-24 19:41:48.860816] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:23.312 [2024-04-24 19:41:48.862038] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:23.312 [2024-04-24 19:41:48.862116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.312 [2024-04-24 19:41:48.862175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:23.312 [2024-04-24 19:41:48.862208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.321 ms 00:27:23.312 [2024-04-24 19:41:48.862243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.312 [2024-04-24 19:41:48.863679] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: 
clean 0, shm_clean 0 00:27:23.312 [2024-04-24 19:41:48.884060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.312 [2024-04-24 19:41:48.884137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:23.312 [2024-04-24 19:41:48.884167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.422 ms 00:27:23.312 [2024-04-24 19:41:48.884206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.312 [2024-04-24 19:41:48.884288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.312 [2024-04-24 19:41:48.884316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:23.312 [2024-04-24 19:41:48.884380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:23.312 [2024-04-24 19:41:48.884414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.312 [2024-04-24 19:41:48.891146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.312 [2024-04-24 19:41:48.891213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:23.312 [2024-04-24 19:41:48.891245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.621 ms 00:27:23.312 [2024-04-24 19:41:48.891270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.312 [2024-04-24 19:41:48.891356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.312 [2024-04-24 19:41:48.891396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:23.312 [2024-04-24 19:41:48.891429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:23.312 [2024-04-24 19:41:48.891456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.312 [2024-04-24 19:41:48.891518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.312 [2024-04-24 19:41:48.891572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:23.312 [2024-04-24 19:41:48.891605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:23.312 [2024-04-24 19:41:48.891658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.312 [2024-04-24 19:41:48.891715] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:23.312 [2024-04-24 19:41:48.897429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.312 [2024-04-24 19:41:48.897492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:23.312 [2024-04-24 19:41:48.897524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.734 ms 00:27:23.312 [2024-04-24 19:41:48.897543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.312 [2024-04-24 19:41:48.897607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.312 [2024-04-24 19:41:48.897628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:23.312 [2024-04-24 19:41:48.897677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:23.312 [2024-04-24 19:41:48.897703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.312 [2024-04-24 19:41:48.897759] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:23.312 [2024-04-24 19:41:48.897816] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob 
load 0x138 bytes 00:27:23.312 [2024-04-24 19:41:48.897855] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:23.312 [2024-04-24 19:41:48.897873] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:23.312 [2024-04-24 19:41:48.897935] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:23.312 [2024-04-24 19:41:48.897947] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:23.312 [2024-04-24 19:41:48.897956] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:23.312 [2024-04-24 19:41:48.897967] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:23.312 [2024-04-24 19:41:48.897975] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:23.312 [2024-04-24 19:41:48.897983] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:23.312 [2024-04-24 19:41:48.897990] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:23.312 [2024-04-24 19:41:48.897997] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:23.312 [2024-04-24 19:41:48.898004] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:23.312 [2024-04-24 19:41:48.898012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.312 [2024-04-24 19:41:48.898019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:23.312 [2024-04-24 19:41:48.898026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.257 ms 00:27:23.312 [2024-04-24 19:41:48.898035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.312 [2024-04-24 19:41:48.898095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.312 [2024-04-24 19:41:48.898104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:23.312 [2024-04-24 19:41:48.898111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:23.312 [2024-04-24 19:41:48.898118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.312 [2024-04-24 19:41:48.898183] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:23.312 [2024-04-24 19:41:48.898193] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:23.312 [2024-04-24 19:41:48.898201] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:23.312 [2024-04-24 19:41:48.898208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.312 [2024-04-24 19:41:48.898218] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:23.312 [2024-04-24 19:41:48.898225] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:23.312 [2024-04-24 19:41:48.898232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:23.312 [2024-04-24 19:41:48.898238] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:23.312 [2024-04-24 19:41:48.898245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:23.312 [2024-04-24 19:41:48.898251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.312 
[2024-04-24 19:41:48.898258] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:23.312 [2024-04-24 19:41:48.898264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:23.312 [2024-04-24 19:41:48.898270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.312 [2024-04-24 19:41:48.898276] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:23.312 [2024-04-24 19:41:48.898282] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:23.312 [2024-04-24 19:41:48.898288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.312 [2024-04-24 19:41:48.898295] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:23.312 [2024-04-24 19:41:48.898301] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:23.312 [2024-04-24 19:41:48.898307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.312 [2024-04-24 19:41:48.898313] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:23.312 [2024-04-24 19:41:48.898319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:23.312 [2024-04-24 19:41:48.898325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:23.312 [2024-04-24 19:41:48.898330] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:23.312 [2024-04-24 19:41:48.898337] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:23.312 [2024-04-24 19:41:48.898343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:23.312 [2024-04-24 19:41:48.898348] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:23.312 [2024-04-24 19:41:48.898355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:23.312 [2024-04-24 19:41:48.898361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:23.312 [2024-04-24 19:41:48.898367] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:23.312 [2024-04-24 19:41:48.898373] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:23.312 [2024-04-24 19:41:48.898378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:23.312 [2024-04-24 19:41:48.898384] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:23.312 [2024-04-24 19:41:48.898390] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:23.312 [2024-04-24 19:41:48.898396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:23.312 [2024-04-24 19:41:48.898403] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:23.312 [2024-04-24 19:41:48.898409] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:23.312 [2024-04-24 19:41:48.898415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.313 [2024-04-24 19:41:48.898421] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:23.313 [2024-04-24 19:41:48.898427] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:23.313 [2024-04-24 19:41:48.898433] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.313 [2024-04-24 19:41:48.898439] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:23.313 [2024-04-24 19:41:48.898446] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 
00:27:23.313 [2024-04-24 19:41:48.898453] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:23.313 [2024-04-24 19:41:48.898459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:23.313 [2024-04-24 19:41:48.898466] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:23.313 [2024-04-24 19:41:48.898473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:23.313 [2024-04-24 19:41:48.898479] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:23.313 [2024-04-24 19:41:48.898485] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:23.313 [2024-04-24 19:41:48.898491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:23.313 [2024-04-24 19:41:48.898507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:23.313 [2024-04-24 19:41:48.898515] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:23.313 [2024-04-24 19:41:48.898524] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:23.313 [2024-04-24 19:41:48.898533] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:23.313 [2024-04-24 19:41:48.898540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:23.313 [2024-04-24 19:41:48.898547] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:23.313 [2024-04-24 19:41:48.898554] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:23.313 [2024-04-24 19:41:48.898561] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:23.313 [2024-04-24 19:41:48.898567] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:23.313 [2024-04-24 19:41:48.898574] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:23.313 [2024-04-24 19:41:48.898581] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:23.313 [2024-04-24 19:41:48.898587] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:23.313 [2024-04-24 19:41:48.898594] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:23.313 [2024-04-24 19:41:48.898601] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:23.313 [2024-04-24 19:41:48.898608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:23.313 [2024-04-24 19:41:48.898615] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:23.313 [2024-04-24 19:41:48.898622] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:23.313 [2024-04-24 19:41:48.898631] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:23.313 [2024-04-24 19:41:48.898731] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:23.313 [2024-04-24 19:41:48.898765] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:23.313 [2024-04-24 19:41:48.898812] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:23.313 [2024-04-24 19:41:48.898841] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:23.313 [2024-04-24 19:41:48.898886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.313 [2024-04-24 19:41:48.898921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:23.313 [2024-04-24 19:41:48.898955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.740 ms 00:27:23.313 [2024-04-24 19:41:48.898975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.313 [2024-04-24 19:41:48.921415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.313 [2024-04-24 19:41:48.921484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:23.313 [2024-04-24 19:41:48.921515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.405 ms 00:27:23.313 [2024-04-24 19:41:48.921534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.313 [2024-04-24 19:41:48.921581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.313 [2024-04-24 19:41:48.921600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:23.313 [2024-04-24 19:41:48.921619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:23.313 [2024-04-24 19:41:48.921649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.313 [2024-04-24 19:41:48.971345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.313 [2024-04-24 19:41:48.971468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:23.313 [2024-04-24 19:41:48.971498] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 49.733 ms 00:27:23.313 [2024-04-24 19:41:48.971527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.313 [2024-04-24 19:41:48.971587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.313 [2024-04-24 19:41:48.971608] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:23.313 [2024-04-24 19:41:48.971652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:23.313 [2024-04-24 19:41:48.971674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.313 [2024-04-24 19:41:48.972133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.313 [2024-04-24 19:41:48.972181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:23.313 [2024-04-24 19:41:48.972212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.400 ms 00:27:23.313 
[2024-04-24 19:41:48.972232] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.313 [2024-04-24 19:41:48.972337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.313 [2024-04-24 19:41:48.972376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:23.313 [2024-04-24 19:41:48.972406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:23.313 [2024-04-24 19:41:48.972434] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.573 [2024-04-24 19:41:48.995104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.573 [2024-04-24 19:41:48.995194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:23.573 [2024-04-24 19:41:48.995237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.666 ms 00:27:23.573 [2024-04-24 19:41:48.995278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.573 [2024-04-24 19:41:49.014516] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:23.573 [2024-04-24 19:41:49.014604] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:23.573 [2024-04-24 19:41:49.014617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.573 [2024-04-24 19:41:49.014642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:23.573 [2024-04-24 19:41:49.014663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.244 ms 00:27:23.573 [2024-04-24 19:41:49.014670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.573 [2024-04-24 19:41:49.035109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.573 [2024-04-24 19:41:49.035155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:23.573 [2024-04-24 19:41:49.035167] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.427 ms 00:27:23.573 [2024-04-24 19:41:49.035174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.573 [2024-04-24 19:41:49.054359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.573 [2024-04-24 19:41:49.054411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:23.573 [2024-04-24 19:41:49.054423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.162 ms 00:27:23.573 [2024-04-24 19:41:49.054431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.573 [2024-04-24 19:41:49.073421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.573 [2024-04-24 19:41:49.073471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:23.573 [2024-04-24 19:41:49.073482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.964 ms 00:27:23.573 [2024-04-24 19:41:49.073489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.573 [2024-04-24 19:41:49.074048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.573 [2024-04-24 19:41:49.074072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:23.573 [2024-04-24 19:41:49.074085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.437 ms 00:27:23.573 [2024-04-24 19:41:49.074092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:27:23.573 [2024-04-24 19:41:49.160385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.573 [2024-04-24 19:41:49.160447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:23.573 [2024-04-24 19:41:49.160481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 86.435 ms 00:27:23.573 [2024-04-24 19:41:49.160489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.573 [2024-04-24 19:41:49.172447] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:23.573 [2024-04-24 19:41:49.173434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.573 [2024-04-24 19:41:49.173463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:23.573 [2024-04-24 19:41:49.173474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.895 ms 00:27:23.573 [2024-04-24 19:41:49.173481] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.573 [2024-04-24 19:41:49.173568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.573 [2024-04-24 19:41:49.173579] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:23.573 [2024-04-24 19:41:49.173590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:23.573 [2024-04-24 19:41:49.173598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.573 [2024-04-24 19:41:49.173664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.573 [2024-04-24 19:41:49.173674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:23.573 [2024-04-24 19:41:49.173681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:27:23.573 [2024-04-24 19:41:49.173688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.573 [2024-04-24 19:41:49.175326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.573 [2024-04-24 19:41:49.175354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:23.573 [2024-04-24 19:41:49.175363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.623 ms 00:27:23.573 [2024-04-24 19:41:49.175369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.573 [2024-04-24 19:41:49.175404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.573 [2024-04-24 19:41:49.175412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:23.573 [2024-04-24 19:41:49.175420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:23.573 [2024-04-24 19:41:49.175426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.573 [2024-04-24 19:41:49.175458] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:23.573 [2024-04-24 19:41:49.175468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.574 [2024-04-24 19:41:49.175475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:23.574 [2024-04-24 19:41:49.175482] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:23.574 [2024-04-24 19:41:49.175489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.574 [2024-04-24 19:41:49.211401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.574 [2024-04-24 19:41:49.211438] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:23.574 [2024-04-24 19:41:49.211449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 35.960 ms 00:27:23.574 [2024-04-24 19:41:49.211456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.574 [2024-04-24 19:41:49.211528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.574 [2024-04-24 19:41:49.211537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:23.574 [2024-04-24 19:41:49.211545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:23.574 [2024-04-24 19:41:49.211558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.574 [2024-04-24 19:41:49.212697] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 352.615 ms, result 0 00:27:23.574 [2024-04-24 19:41:49.227705] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.574 [2024-04-24 19:41:49.243684] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:23.833 [2024-04-24 19:41:49.253653] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:24.400 19:41:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:24.400 19:41:49 -- common/autotest_common.sh@850 -- # return 0 00:27:24.400 19:41:49 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:24.401 19:41:49 -- ftl/common.sh@95 -- # return 0 00:27:24.401 19:41:49 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:24.401 [2024-04-24 19:41:50.064834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.401 [2024-04-24 19:41:50.064897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:24.401 [2024-04-24 19:41:50.064911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:24.401 [2024-04-24 19:41:50.064919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.401 [2024-04-24 19:41:50.064945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.401 [2024-04-24 19:41:50.064952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:24.401 [2024-04-24 19:41:50.064964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:24.401 [2024-04-24 19:41:50.064970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.401 [2024-04-24 19:41:50.064988] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:24.401 [2024-04-24 19:41:50.064996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:24.401 [2024-04-24 19:41:50.065003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:24.401 [2024-04-24 19:41:50.065009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:24.401 [2024-04-24 19:41:50.065065] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.232 ms, result 0 00:27:24.401 true 00:27:24.661 19:41:50 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:24.661 { 00:27:24.661 "name": "ftl", 00:27:24.661 "properties": [ 00:27:24.661 { 00:27:24.661 
"name": "superblock_version", 00:27:24.661 "value": 5, 00:27:24.661 "read-only": true 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "name": "base_device", 00:27:24.661 "bands": [ 00:27:24.661 { 00:27:24.661 "id": 0, 00:27:24.661 "state": "CLOSED", 00:27:24.661 "validity": 1.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 1, 00:27:24.661 "state": "CLOSED", 00:27:24.661 "validity": 1.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 2, 00:27:24.661 "state": "CLOSED", 00:27:24.661 "validity": 0.007843137254901933 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 3, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 4, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 5, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 6, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 7, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 8, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 9, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 10, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 11, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 12, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 13, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 14, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 15, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 16, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 17, 00:27:24.661 "state": "FREE", 00:27:24.661 "validity": 0.0 00:27:24.661 } 00:27:24.661 ], 00:27:24.661 "read-only": true 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "name": "cache_device", 00:27:24.661 "type": "bdev", 00:27:24.661 "chunks": [ 00:27:24.661 { 00:27:24.661 "id": 0, 00:27:24.661 "state": "OPEN", 00:27:24.661 "utilization": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 1, 00:27:24.661 "state": "OPEN", 00:27:24.661 "utilization": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 2, 00:27:24.661 "state": "FREE", 00:27:24.661 "utilization": 0.0 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "id": 3, 00:27:24.661 "state": "FREE", 00:27:24.661 "utilization": 0.0 00:27:24.661 } 00:27:24.661 ], 00:27:24.661 "read-only": true 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "name": "verbose_mode", 00:27:24.661 "value": true, 00:27:24.661 "unit": "", 00:27:24.661 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:24.661 }, 00:27:24.661 { 00:27:24.661 "name": "prep_upgrade_on_shutdown", 00:27:24.661 "value": false, 00:27:24.661 "unit": "", 00:27:24.661 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:24.661 } 00:27:24.661 ] 00:27:24.661 } 00:27:24.661 19:41:50 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:24.661 19:41:50 -- ftl/upgrade_shutdown.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:24.661 19:41:50 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:24.921 19:41:50 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:24.921 19:41:50 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:24.921 19:41:50 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:24.921 19:41:50 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:24.921 19:41:50 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:25.180 Validate MD5 checksum, iteration 1 00:27:25.180 19:41:50 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:25.180 19:41:50 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:25.180 19:41:50 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:25.180 19:41:50 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:25.180 19:41:50 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:25.180 19:41:50 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:25.180 19:41:50 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:25.180 19:41:50 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:25.180 19:41:50 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:25.180 19:41:50 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:25.180 19:41:50 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:25.180 19:41:50 -- ftl/common.sh@154 -- # return 0 00:27:25.180 19:41:50 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:25.180 [2024-04-24 19:41:50.743563] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
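[Editor's note] The xtrace lines above enter test_validate_checksum (upgrade_shutdown.sh@96-@99): after confirming via jq that no cache chunks are in use and no bands are OPENED, the test reads the device back in 1 GiB windows and checks each window's MD5 against an expected value captured earlier in the test. A condensed sketch of that loop, reconstructed from the trace (tcp_dd is the ftl/common.sh wrapper around the spdk_dd command shown; the skip arithmetic is an inference, confirmed by the "skip=1024" trace in the next iteration):

    # Sketch of the traced loop, assuming skip advances by the block count.
    iterations=2
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read 1024 x 1 MiB blocks from the ftln1 initiator bdev into a file,
        # starting $skip MiB into the device, with a queue depth of 2.
        tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
    done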
00:27:25.180 [2024-04-24 19:41:50.743703] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84523 ] 00:27:25.439 [2024-04-24 19:41:50.896369] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.697 [2024-04-24 19:41:51.145516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.972  Copying: 627/1024 [MB] (627 MBps) Copying: 1024/1024 [MB] (average 608 MBps) 00:27:29.972 00:27:29.972 19:41:55 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:29.972 19:41:55 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:31.343 19:41:57 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:31.343 Validate MD5 checksum, iteration 2 00:27:31.343 19:41:57 -- ftl/upgrade_shutdown.sh@103 -- # sum=038fd43b4d803b4adabd3177499e4c35 00:27:31.343 19:41:57 -- ftl/upgrade_shutdown.sh@105 -- # [[ 038fd43b4d803b4adabd3177499e4c35 != \0\3\8\f\d\4\3\b\4\d\8\0\3\b\4\a\d\a\b\d\3\1\7\7\4\9\9\e\4\c\3\5 ]] 00:27:31.343 19:41:57 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:31.343 19:41:57 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:31.343 19:41:57 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:31.343 19:41:57 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:31.343 19:41:57 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:31.343 19:41:57 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:31.343 19:41:57 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:31.343 19:41:57 -- ftl/common.sh@154 -- # return 0 00:27:31.343 19:41:57 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:31.600 [2024-04-24 19:41:57.095203] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
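[Editor's note] The backslash-escaped right-hand side in the @105 comparison above is xtrace rendering, not the script's source: an unquoted word after != inside [[ ]] is a glob pattern, so bash prints each character escaped to show it will match literally. A minimal equivalent of the @102-@105 steps, using the sum observed in this run as the expected value:

    # Recompute the MD5 of the file read back from ftln1 and compare it to
    # the expected sum; quoting the right-hand side of != forces a literal
    # match, which is what the escaped trace output denotes.
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    [[ $sum != "038fd43b4d803b4adabd3177499e4c35" ]] && exit 1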
00:27:31.600 [2024-04-24 19:41:57.095418] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84591 ] 00:27:31.600 [2024-04-24 19:41:57.245094] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.857 [2024-04-24 19:41:57.489278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.859  Copying: 601/1024 [MB] (601 MBps) Copying: 1024/1024 [MB] (average 587 MBps) 00:27:36.859 00:27:36.859 19:42:02 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:36.859 19:42:02 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:38.241 19:42:03 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:38.241 19:42:03 -- ftl/upgrade_shutdown.sh@103 -- # sum=601ff13618ad0917b6b4183be980343b 00:27:38.241 19:42:03 -- ftl/upgrade_shutdown.sh@105 -- # [[ 601ff13618ad0917b6b4183be980343b != \6\0\1\f\f\1\3\6\1\8\a\d\0\9\1\7\b\6\b\4\1\8\3\b\e\9\8\0\3\4\3\b ]] 00:27:38.241 19:42:03 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:38.241 19:42:03 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:38.241 19:42:03 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:38.241 19:42:03 -- ftl/common.sh@137 -- # [[ -n 84468 ]] 00:27:38.241 19:42:03 -- ftl/common.sh@138 -- # kill -9 84468 00:27:38.241 19:42:03 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:38.241 19:42:03 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:38.241 19:42:03 -- ftl/common.sh@81 -- # local base_bdev= 00:27:38.241 19:42:03 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:38.241 19:42:03 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:38.241 19:42:03 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:38.241 19:42:03 -- ftl/common.sh@89 -- # spdk_tgt_pid=84664 00:27:38.241 19:42:03 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:38.241 19:42:03 -- ftl/common.sh@91 -- # waitforlisten 84664 00:27:38.241 19:42:03 -- common/autotest_common.sh@817 -- # '[' -z 84664 ']' 00:27:38.241 19:42:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.241 19:42:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:38.241 19:42:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.241 19:42:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:38.241 19:42:03 -- common/autotest_common.sh@10 -- # set +x 00:27:38.241 [2024-04-24 19:42:03.872724] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
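[Editor's note] tcp_target_shutdown_dirty (ftl/common.sh@137-@139), traced just above, is the crux of this test: SIGKILL prevents the target from running the clean shutdown sequence seen earlier (no "Set FTL clean state" step executes), so the device stays marked dirty and the replacement target, pid 84664 here, must exercise the recovery path on its next startup. A sketch of what those trace lines do; the backgrounding and pid bookkeeping are assumed rather than copied from the helper:

    # Kill the old target hard so its FTL instance cannot shut down cleanly,
    # then relaunch from the same JSON config and wait for /var/tmp/spdk.sock.
    kill -9 "$spdk_tgt_pid"        # was 84468 in this run
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!                # 84664 in this run
    waitforlisten "$spdk_tgt_pid"  # autotest_common.sh helper; polls the RPC socket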
00:27:38.241 [2024-04-24 19:42:03.872919] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84664 ] 00:27:38.501 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 816: 84468 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:38.501 [2024-04-24 19:42:04.024320] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.760 [2024-04-24 19:42:04.259192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.699 [2024-04-24 19:42:05.242291] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:39.699 [2024-04-24 19:42:05.242356] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:39.960 [2024-04-24 19:42:05.385143] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.960 [2024-04-24 19:42:05.385203] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:39.960 [2024-04-24 19:42:05.385216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:39.960 [2024-04-24 19:42:05.385224] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-04-24 19:42:05.385282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.960 [2024-04-24 19:42:05.385291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:39.960 [2024-04-24 19:42:05.385300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:39.960 [2024-04-24 19:42:05.385306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-04-24 19:42:05.385334] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:39.960 [2024-04-24 19:42:05.386520] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:39.960 [2024-04-24 19:42:05.386549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.960 [2024-04-24 19:42:05.386558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:39.960 [2024-04-24 19:42:05.386569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.230 ms 00:27:39.960 [2024-04-24 19:42:05.386576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-04-24 19:42:05.386879] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:39.960 [2024-04-24 19:42:05.412954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.960 [2024-04-24 19:42:05.413024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:39.960 [2024-04-24 19:42:05.413050] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.124 ms 00:27:39.960 [2024-04-24 19:42:05.413061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.960 [2024-04-24 19:42:05.429497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.960 [2024-04-24 19:42:05.429535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:39.960 [2024-04-24 19:42:05.429545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:39.960 [2024-04-24 19:42:05.429553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:39.960 [2024-04-24 19:42:05.429918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.960 [2024-04-24 19:42:05.429938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:39.960 [2024-04-24 19:42:05.429947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:27:39.961 [2024-04-24 19:42:05.429954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.961 [2024-04-24 19:42:05.429992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.961 [2024-04-24 19:42:05.430003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:39.961 [2024-04-24 19:42:05.430011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:39.961 [2024-04-24 19:42:05.430019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.961 [2024-04-24 19:42:05.430051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.961 [2024-04-24 19:42:05.430069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:39.961 [2024-04-24 19:42:05.430080] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:39.961 [2024-04-24 19:42:05.430089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.961 [2024-04-24 19:42:05.430113] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:39.961 [2024-04-24 19:42:05.435773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.961 [2024-04-24 19:42:05.435806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:39.961 [2024-04-24 19:42:05.435821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.679 ms 00:27:39.961 [2024-04-24 19:42:05.435830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.961 [2024-04-24 19:42:05.435863] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.961 [2024-04-24 19:42:05.435873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:39.961 [2024-04-24 19:42:05.435883] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:39.961 [2024-04-24 19:42:05.435891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.961 [2024-04-24 19:42:05.435927] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:39.961 [2024-04-24 19:42:05.435949] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:39.961 [2024-04-24 19:42:05.435996] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:39.961 [2024-04-24 19:42:05.436017] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:39.961 [2024-04-24 19:42:05.436090] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:39.961 [2024-04-24 19:42:05.436112] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:39.961 [2024-04-24 19:42:05.436124] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:39.961 [2024-04-24 19:42:05.436135] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base 
device capacity: 20480.00 MiB 00:27:39.961 [2024-04-24 19:42:05.436145] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:39.961 [2024-04-24 19:42:05.436158] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:39.961 [2024-04-24 19:42:05.436166] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:39.961 [2024-04-24 19:42:05.436175] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:39.961 [2024-04-24 19:42:05.436183] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:39.961 [2024-04-24 19:42:05.436193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.961 [2024-04-24 19:42:05.436202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:39.961 [2024-04-24 19:42:05.436211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:27:39.961 [2024-04-24 19:42:05.436220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.961 [2024-04-24 19:42:05.436284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.961 [2024-04-24 19:42:05.436294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:39.961 [2024-04-24 19:42:05.436302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:27:39.961 [2024-04-24 19:42:05.436313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.961 [2024-04-24 19:42:05.436390] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:39.961 [2024-04-24 19:42:05.436402] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:39.961 [2024-04-24 19:42:05.436411] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:39.961 [2024-04-24 19:42:05.436420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.961 [2024-04-24 19:42:05.436429] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:39.961 [2024-04-24 19:42:05.436436] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:39.961 [2024-04-24 19:42:05.436444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:39.961 [2024-04-24 19:42:05.436453] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:39.961 [2024-04-24 19:42:05.436461] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:39.961 [2024-04-24 19:42:05.436469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.961 [2024-04-24 19:42:05.436477] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:39.961 [2024-04-24 19:42:05.436484] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:39.961 [2024-04-24 19:42:05.436492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.961 [2024-04-24 19:42:05.436500] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:39.961 [2024-04-24 19:42:05.436513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:39.961 [2024-04-24 19:42:05.436531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.961 [2024-04-24 19:42:05.436537] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:39.961 [2024-04-24 19:42:05.436544] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:39.961 
[2024-04-24 19:42:05.436552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.961 [2024-04-24 19:42:05.436560] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:39.961 [2024-04-24 19:42:05.436566] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:39.961 [2024-04-24 19:42:05.436573] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:39.961 [2024-04-24 19:42:05.436579] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:39.961 [2024-04-24 19:42:05.436585] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:39.961 [2024-04-24 19:42:05.436592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:39.961 [2024-04-24 19:42:05.436599] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:39.961 [2024-04-24 19:42:05.436606] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:39.961 [2024-04-24 19:42:05.436612] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:39.961 [2024-04-24 19:42:05.436618] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:39.961 [2024-04-24 19:42:05.436625] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:39.961 [2024-04-24 19:42:05.436631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:39.961 [2024-04-24 19:42:05.436637] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:39.961 [2024-04-24 19:42:05.436764] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:39.961 [2024-04-24 19:42:05.436792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:39.961 [2024-04-24 19:42:05.436814] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:39.961 [2024-04-24 19:42:05.436844] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:39.961 [2024-04-24 19:42:05.436864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.961 [2024-04-24 19:42:05.436888] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:39.961 [2024-04-24 19:42:05.436923] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:39.961 [2024-04-24 19:42:05.436946] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.961 [2024-04-24 19:42:05.436970] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:39.961 [2024-04-24 19:42:05.437010] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:39.961 [2024-04-24 19:42:05.437038] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:39.961 [2024-04-24 19:42:05.437068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:39.961 [2024-04-24 19:42:05.437106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:39.961 [2024-04-24 19:42:05.437139] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:39.961 [2024-04-24 19:42:05.437179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:39.961 [2024-04-24 19:42:05.437233] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:39.961 [2024-04-24 19:42:05.437263] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:39.961 [2024-04-24 19:42:05.437305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:39.961 
[2024-04-24 19:42:05.437337] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:39.961 [2024-04-24 19:42:05.437383] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:39.961 [2024-04-24 19:42:05.437433] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:39.961 [2024-04-24 19:42:05.437487] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:39.961 [2024-04-24 19:42:05.437523] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:39.961 [2024-04-24 19:42:05.437553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:39.961 [2024-04-24 19:42:05.437561] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:39.961 [2024-04-24 19:42:05.437568] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:39.962 [2024-04-24 19:42:05.437575] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:39.962 [2024-04-24 19:42:05.437582] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:39.962 [2024-04-24 19:42:05.437589] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:39.962 [2024-04-24 19:42:05.437596] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:39.962 [2024-04-24 19:42:05.437603] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:39.962 [2024-04-24 19:42:05.437609] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:39.962 [2024-04-24 19:42:05.437616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:39.962 [2024-04-24 19:42:05.437624] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:39.962 [2024-04-24 19:42:05.437640] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:39.962 [2024-04-24 19:42:05.437649] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:39.962 [2024-04-24 19:42:05.437656] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:39.962 [2024-04-24 19:42:05.437663] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:39.962 [2024-04-24 19:42:05.437670] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x480120 blk_sz:0x7fee0 00:27:39.962 [2024-04-24 19:42:05.437678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.962 [2024-04-24 19:42:05.437686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:39.962 [2024-04-24 19:42:05.437693] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.331 ms 00:27:39.962 [2024-04-24 19:42:05.437700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.962 [2024-04-24 19:42:05.462551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.962 [2024-04-24 19:42:05.462584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:39.962 [2024-04-24 19:42:05.462595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.842 ms 00:27:39.962 [2024-04-24 19:42:05.462603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.962 [2024-04-24 19:42:05.462655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.962 [2024-04-24 19:42:05.462664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:39.962 [2024-04-24 19:42:05.462675] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:27:39.962 [2024-04-24 19:42:05.462682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.962 [2024-04-24 19:42:05.524209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.962 [2024-04-24 19:42:05.524259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:39.962 [2024-04-24 19:42:05.524273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 61.576 ms 00:27:39.962 [2024-04-24 19:42:05.524283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.962 [2024-04-24 19:42:05.524348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.962 [2024-04-24 19:42:05.524357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:39.962 [2024-04-24 19:42:05.524367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:39.962 [2024-04-24 19:42:05.524375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.962 [2024-04-24 19:42:05.524516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.962 [2024-04-24 19:42:05.524526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:39.962 [2024-04-24 19:42:05.524534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.075 ms 00:27:39.962 [2024-04-24 19:42:05.524542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.962 [2024-04-24 19:42:05.524576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.962 [2024-04-24 19:42:05.524585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:39.962 [2024-04-24 19:42:05.524593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:39.962 [2024-04-24 19:42:05.524601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.962 [2024-04-24 19:42:05.551254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.962 [2024-04-24 19:42:05.551323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:39.962 [2024-04-24 19:42:05.551339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.680 ms 00:27:39.962 [2024-04-24 19:42:05.551348] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.962 [2024-04-24 19:42:05.551529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.962 [2024-04-24 19:42:05.551547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:39.962 [2024-04-24 19:42:05.551558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:39.962 [2024-04-24 19:42:05.551567] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.962 [2024-04-24 19:42:05.582117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.962 [2024-04-24 19:42:05.582169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:39.962 [2024-04-24 19:42:05.582184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.580 ms 00:27:39.962 [2024-04-24 19:42:05.582193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:39.962 [2024-04-24 19:42:05.600178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:39.962 [2024-04-24 19:42:05.600224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:39.962 [2024-04-24 19:42:05.600237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.441 ms 00:27:39.962 [2024-04-24 19:42:05.600250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.221 [2024-04-24 19:42:05.705496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.221 [2024-04-24 19:42:05.705555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:40.221 [2024-04-24 19:42:05.705576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 105.374 ms 00:27:40.221 [2024-04-24 19:42:05.705584] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.221 [2024-04-24 19:42:05.705712] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:40.221 [2024-04-24 19:42:05.705771] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:40.221 [2024-04-24 19:42:05.705811] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:40.221 [2024-04-24 19:42:05.705850] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:40.221 [2024-04-24 19:42:05.705868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.221 [2024-04-24 19:42:05.705876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:40.221 [2024-04-24 19:42:05.705884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:27:40.221 [2024-04-24 19:42:05.705891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.221 [2024-04-24 19:42:05.705969] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:40.221 [2024-04-24 19:42:05.705980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.221 [2024-04-24 19:42:05.705987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:40.221 [2024-04-24 19:42:05.705999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:40.221 [2024-04-24 19:42:05.706006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.221 [2024-04-24 19:42:05.734689] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.221 [2024-04-24 19:42:05.734735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:40.221 [2024-04-24 19:42:05.734752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 28.715 ms 00:27:40.221 [2024-04-24 19:42:05.734760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.221 [2024-04-24 19:42:05.752253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.221 [2024-04-24 19:42:05.752297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:40.221 [2024-04-24 19:42:05.752310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:40.221 [2024-04-24 19:42:05.752319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.221 [2024-04-24 19:42:05.752381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:40.221 [2024-04-24 19:42:05.752392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:27:40.221 [2024-04-24 19:42:05.752402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:40.221 [2024-04-24 19:42:05.752414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:40.221 [2024-04-24 19:42:05.752615] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:27:40.790 [2024-04-24 19:42:06.257904] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:27:40.790 [2024-04-24 19:42:06.258096] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:27:41.358 [2024-04-24 19:42:06.771352] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:27:41.359 [2024-04-24 19:42:06.771456] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:41.359 [2024-04-24 19:42:06.771472] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:41.359 [2024-04-24 19:42:06.771484] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.359 [2024-04-24 19:42:06.771495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:41.359 [2024-04-24 19:42:06.771509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1021.014 ms 00:27:41.359 [2024-04-24 19:42:06.771519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.359 [2024-04-24 19:42:06.771555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.359 [2024-04-24 19:42:06.771566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:41.359 [2024-04-24 19:42:06.771576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:41.359 [2024-04-24 19:42:06.771585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.359 [2024-04-24 19:42:06.787354] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:41.359 [2024-04-24 19:42:06.787537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.359 [2024-04-24 19:42:06.787554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:41.359 [2024-04-24 19:42:06.787570] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.954 ms 00:27:41.359 [2024-04-24 19:42:06.787582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.359 [2024-04-24 19:42:06.788326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.359 [2024-04-24 19:42:06.788362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:27:41.359 [2024-04-24 19:42:06.788374] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.618 ms 00:27:41.359 [2024-04-24 19:42:06.788383] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.359 [2024-04-24 19:42:06.790799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.359 [2024-04-24 19:42:06.790826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:41.359 [2024-04-24 19:42:06.790836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.394 ms 00:27:41.359 [2024-04-24 19:42:06.790844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.359 [2024-04-24 19:42:06.836209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.359 [2024-04-24 19:42:06.836259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:27:41.359 [2024-04-24 19:42:06.836275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 45.426 ms 00:27:41.359 [2024-04-24 19:42:06.836285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.359 [2024-04-24 19:42:06.836430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.359 [2024-04-24 19:42:06.836448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:41.359 [2024-04-24 19:42:06.836458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:41.359 [2024-04-24 19:42:06.836466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.359 [2024-04-24 19:42:06.838380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.359 [2024-04-24 19:42:06.838414] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:41.359 [2024-04-24 19:42:06.838425] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.896 ms 00:27:41.359 [2024-04-24 19:42:06.838435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.359 [2024-04-24 19:42:06.838473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.359 [2024-04-24 19:42:06.838483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:41.359 [2024-04-24 19:42:06.838494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:41.359 [2024-04-24 19:42:06.838503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.359 [2024-04-24 19:42:06.838538] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:41.359 [2024-04-24 19:42:06.838550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.359 [2024-04-24 19:42:06.838558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:41.359 [2024-04-24 19:42:06.838568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:41.359 [2024-04-24 19:42:06.838576] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.359 [2024-04-24 19:42:06.838662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:41.359 [2024-04-24 19:42:06.838687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:41.359 [2024-04-24 19:42:06.838696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:27:41.359 [2024-04-24 19:42:06.838715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.359 [2024-04-24 19:42:06.840064] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1457.128 ms, result 0 00:27:41.359 [2024-04-24 19:42:06.852593] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.359 [2024-04-24 19:42:06.868573] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:41.359 [2024-04-24 19:42:06.880008] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:41.359 19:42:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:41.359 19:42:06 -- common/autotest_common.sh@850 -- # return 0 00:27:41.359 19:42:06 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:41.359 19:42:06 -- ftl/common.sh@95 -- # return 0 00:27:41.359 19:42:06 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:41.359 19:42:06 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:41.359 19:42:06 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:41.359 19:42:06 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:41.359 19:42:06 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:41.359 Validate MD5 checksum, iteration 1 00:27:41.359 19:42:06 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:41.359 19:42:06 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:41.359 19:42:06 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:41.359 19:42:06 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:41.359 19:42:06 -- ftl/common.sh@154 -- # return 0 00:27:41.359 19:42:06 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:41.359 [2024-04-24 19:42:07.007062] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
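
With the restarted target listening on 127.0.0.1:4420, the test re-enters test_validate_checksum and re-reads the entire device over NVMe/TCP to prove the dirty restart lost no data. Reconstructed from the upgrade_shutdown.sh xtrace (script lines 96-105 as echoed in the log), the loop looks roughly like the sketch below; the value of iterations and the way the reference sums are stored are not visible in the trace, so the sums array is an assumption:

    test_validate_checksum() {
        local skip=0 i sum
        for (( i = 0; i < iterations; i++ )); do
            echo "Validate MD5 checksum, iteration $(( i + 1 ))"
            # Pull 1024 x 1 MiB blocks from the ftln1 bdev via spdk_dd,
            # queue depth 2, starting at the current 1 MiB-block offset
            tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            (( skip += 1024 ))
            sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
            # The reference sums were recorded during the pre-shutdown write
            # phase; the trace only shows their expanded literal values
            if [[ $sum != "${sums[i]}" ]]; then
                return 1
            fi
        done
    }

tcp_dd itself (common.sh@198-199 in the trace) is a thin wrapper that runs spdk_dd with '--cpumask=[1]', the initiator RPC socket /var/tmp/spdk.tgt.sock and the ini.json bdev config, forwarding the remaining arguments unchanged.
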
00:27:41.359 [2024-04-24 19:42:07.007179] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84703 ] 00:27:41.619 [2024-04-24 19:42:07.172317] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.878 [2024-04-24 19:42:07.450337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.657  Copying: 551/1024 [MB] (551 MBps) Copying: 1024/1024 [MB] (average 549 MBps) 00:27:46.657 00:27:46.657 19:42:11 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:46.657 19:42:11 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:48.032 19:42:13 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:48.032 Validate MD5 checksum, iteration 2 00:27:48.032 19:42:13 -- ftl/upgrade_shutdown.sh@103 -- # sum=038fd43b4d803b4adabd3177499e4c35 00:27:48.032 19:42:13 -- ftl/upgrade_shutdown.sh@105 -- # [[ 038fd43b4d803b4adabd3177499e4c35 != \0\3\8\f\d\4\3\b\4\d\8\0\3\b\4\a\d\a\b\d\3\1\7\7\4\9\9\e\4\c\3\5 ]] 00:27:48.032 19:42:13 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:48.032 19:42:13 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:48.032 19:42:13 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:48.032 19:42:13 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:48.032 19:42:13 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:48.032 19:42:13 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:48.032 19:42:13 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:48.032 19:42:13 -- ftl/common.sh@154 -- # return 0 00:27:48.032 19:42:13 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:48.032 [2024-04-24 19:42:13.673399] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 
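
The spdk_dd instance launching here (pid 84771 in the EAL line below) reads blocks 1024-2047 and reproduces the second recorded sum, 601ff13618ad0917b6b4183be980343b; the first slice's sum 038fd43b4d803b4adabd3177499e4c35 was confirmed just above. Once both match, the test clears its traps and tears the target down gracefully: killprocess 84664 sends a plain SIGTERM, so the log that follows shows the full 'FTL shutdown' management chain (persist L2P, NV cache and band metadata, set the clean state flag) rather than another recovery pass. A sketch of killprocess as it can be read off the autotest_common.sh xtrace (lines 936-960 in the trace), with the sudo branch abbreviated:

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1               # is it still running?
        if [ "$(uname)" = Linux ]; then
            # comm shows up as reactor_0 for an SPDK app, per the trace
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            :   # the real helper signals the child of the sudo wrapper
        else
            echo "killing process with pid $pid"
            kill "$pid"                          # SIGTERM: clean shutdown
        fi
        wait "$pid" || true
    }
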
00:27:48.033 [2024-04-24 19:42:13.673609] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84771 ] 00:27:48.291 [2024-04-24 19:42:13.837141] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.549 [2024-04-24 19:42:14.078316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.618  Copying: 626/1024 [MB] (626 MBps) Copying: 1024/1024 [MB] (average 619 MBps) 00:27:52.618 00:27:52.618 19:42:18 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:52.618 19:42:18 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:54.525 19:42:19 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:54.525 19:42:19 -- ftl/upgrade_shutdown.sh@103 -- # sum=601ff13618ad0917b6b4183be980343b 00:27:54.525 19:42:19 -- ftl/upgrade_shutdown.sh@105 -- # [[ 601ff13618ad0917b6b4183be980343b != \6\0\1\f\f\1\3\6\1\8\a\d\0\9\1\7\b\6\b\4\1\8\3\b\e\9\8\0\3\4\3\b ]] 00:27:54.525 19:42:19 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:54.525 19:42:19 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:54.525 19:42:19 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:27:54.525 19:42:19 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:27:54.525 19:42:19 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:27:54.525 19:42:19 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:54.525 19:42:19 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:27:54.525 19:42:19 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:27:54.525 19:42:19 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:27:54.525 19:42:19 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:27:54.525 19:42:19 -- ftl/common.sh@130 -- # [[ -n 84664 ]] 00:27:54.525 19:42:19 -- ftl/common.sh@131 -- # killprocess 84664 00:27:54.525 19:42:19 -- common/autotest_common.sh@936 -- # '[' -z 84664 ']' 00:27:54.525 19:42:19 -- common/autotest_common.sh@940 -- # kill -0 84664 00:27:54.525 19:42:19 -- common/autotest_common.sh@941 -- # uname 00:27:54.525 19:42:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:54.525 19:42:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84664 00:27:54.525 killing process with pid 84664 00:27:54.525 19:42:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:54.525 19:42:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:54.525 19:42:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84664' 00:27:54.525 19:42:19 -- common/autotest_common.sh@955 -- # kill 84664 00:27:54.525 19:42:19 -- common/autotest_common.sh@960 -- # wait 84664 00:27:55.481 [2024-04-24 19:42:21.035373] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:55.481 [2024-04-24 19:42:21.054010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-04-24 19:42:21.054055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:55.481 [2024-04-24 19:42:21.054068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:55.481 [2024-04-24 19:42:21.054076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-04-24 19:42:21.054095] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:55.481 [2024-04-24 19:42:21.057267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-04-24 19:42:21.057297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:55.481 [2024-04-24 19:42:21.057310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.166 ms 00:27:55.481 [2024-04-24 19:42:21.057317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-04-24 19:42:21.057504] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-04-24 19:42:21.057515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:55.481 [2024-04-24 19:42:21.057523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:27:55.481 [2024-04-24 19:42:21.057530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-04-24 19:42:21.058684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-04-24 19:42:21.058712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:55.481 [2024-04-24 19:42:21.058721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.144 ms 00:27:55.481 [2024-04-24 19:42:21.058732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-04-24 19:42:21.059669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-04-24 19:42:21.059695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:55.481 [2024-04-24 19:42:21.059705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.912 ms 00:27:55.481 [2024-04-24 19:42:21.059713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-04-24 19:42:21.075241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-04-24 19:42:21.075281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:55.481 [2024-04-24 19:42:21.075293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.527 ms 00:27:55.481 [2024-04-24 19:42:21.075305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-04-24 19:42:21.083519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-04-24 19:42:21.083551] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:55.481 [2024-04-24 19:42:21.083562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.197 ms 00:27:55.481 [2024-04-24 19:42:21.083569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-04-24 19:42:21.083644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-04-24 19:42:21.083655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:55.481 [2024-04-24 19:42:21.083663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:27:55.481 [2024-04-24 19:42:21.083674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.481 [2024-04-24 19:42:21.098383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.481 [2024-04-24 19:42:21.098411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:55.482 [2024-04-24 19:42:21.098421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.722 
ms 00:27:55.482 [2024-04-24 19:42:21.098428] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.482 [2024-04-24 19:42:21.113465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.482 [2024-04-24 19:42:21.113491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:55.482 [2024-04-24 19:42:21.113501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.038 ms 00:27:55.482 [2024-04-24 19:42:21.113508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.482 [2024-04-24 19:42:21.128758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.482 [2024-04-24 19:42:21.128789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:55.482 [2024-04-24 19:42:21.128812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.249 ms 00:27:55.482 [2024-04-24 19:42:21.128819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.482 [2024-04-24 19:42:21.143764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.482 [2024-04-24 19:42:21.143796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:55.482 [2024-04-24 19:42:21.143806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.920 ms 00:27:55.482 [2024-04-24 19:42:21.143812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.482 [2024-04-24 19:42:21.143841] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:55.482 [2024-04-24 19:42:21.143854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:55.482 [2024-04-24 19:42:21.143864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:55.482 [2024-04-24 19:42:21.143873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:55.482 [2024-04-24 19:42:21.143896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 
state: free 00:27:55.482 [2024-04-24 19:42:21.143976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.143997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:55.482 [2024-04-24 19:42:21.144006] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:55.482 [2024-04-24 19:42:21.144013] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: a6187774-285d-4b0c-bb0d-fc5b9b4e1d3e 00:27:55.482 [2024-04-24 19:42:21.144021] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:55.482 [2024-04-24 19:42:21.144028] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:27:55.482 [2024-04-24 19:42:21.144035] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:27:55.482 [2024-04-24 19:42:21.144043] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:27:55.482 [2024-04-24 19:42:21.144049] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:55.482 [2024-04-24 19:42:21.144056] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:55.482 [2024-04-24 19:42:21.144066] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:55.482 [2024-04-24 19:42:21.144072] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:55.482 [2024-04-24 19:42:21.144079] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:55.482 [2024-04-24 19:42:21.144088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.482 [2024-04-24 19:42:21.144095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:55.482 [2024-04-24 19:42:21.144103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.248 ms 00:27:55.482 [2024-04-24 19:42:21.144110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.163113] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.742 [2024-04-24 19:42:21.163154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:55.742 [2024-04-24 19:42:21.163165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.021 ms 00:27:55.742 [2024-04-24 19:42:21.163172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.163438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:55.742 [2024-04-24 19:42:21.163448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:55.742 [2024-04-24 19:42:21.163457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.215 ms 00:27:55.742 [2024-04-24 19:42:21.163464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.232096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.742 [2024-04-24 19:42:21.232146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:55.742 [2024-04-24 19:42:21.232159] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.742 [2024-04-24 
19:42:21.232166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.232222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.742 [2024-04-24 19:42:21.232231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:55.742 [2024-04-24 19:42:21.232239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.742 [2024-04-24 19:42:21.232246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.232331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.742 [2024-04-24 19:42:21.232344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:55.742 [2024-04-24 19:42:21.232351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.742 [2024-04-24 19:42:21.232359] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.232380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.742 [2024-04-24 19:42:21.232388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:55.742 [2024-04-24 19:42:21.232396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.742 [2024-04-24 19:42:21.232402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.352933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.742 [2024-04-24 19:42:21.352989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:55.742 [2024-04-24 19:42:21.353000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.742 [2024-04-24 19:42:21.353008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.399461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.742 [2024-04-24 19:42:21.399515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:55.742 [2024-04-24 19:42:21.399527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.742 [2024-04-24 19:42:21.399535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.399615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.742 [2024-04-24 19:42:21.399624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:55.742 [2024-04-24 19:42:21.399648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.742 [2024-04-24 19:42:21.399656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.399694] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.742 [2024-04-24 19:42:21.399711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:55.742 [2024-04-24 19:42:21.399718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.742 [2024-04-24 19:42:21.399725] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.399839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.742 [2024-04-24 19:42:21.399852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:55.742 [2024-04-24 19:42:21.399861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:27:55.742 [2024-04-24 19:42:21.399869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.399902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.742 [2024-04-24 19:42:21.399911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:55.742 [2024-04-24 19:42:21.399923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.742 [2024-04-24 19:42:21.399930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.399966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.742 [2024-04-24 19:42:21.399975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:55.742 [2024-04-24 19:42:21.399982] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.742 [2024-04-24 19:42:21.399989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.400035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:55.742 [2024-04-24 19:42:21.400046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:55.742 [2024-04-24 19:42:21.400055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:55.742 [2024-04-24 19:42:21.400062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:55.742 [2024-04-24 19:42:21.400179] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 346.805 ms, result 0 00:27:57.135 19:42:22 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:57.135 19:42:22 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:57.135 19:42:22 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:27:57.135 19:42:22 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:27:57.135 19:42:22 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:27:57.135 19:42:22 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:57.135 Remove shared memory files 00:27:57.135 19:42:22 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:27:57.135 19:42:22 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:57.135 19:42:22 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:57.135 19:42:22 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:57.135 19:42:22 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84468 00:27:57.135 19:42:22 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:57.135 19:42:22 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:57.135 ************************************ 00:27:57.135 END TEST ftl_upgrade_shutdown 00:27:57.135 ************************************ 00:27:57.135 00:27:57.135 real 1m32.535s 00:27:57.135 user 2m10.579s 00:27:57.135 sys 0m20.866s 00:27:57.135 19:42:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:57.135 19:42:22 -- common/autotest_common.sh@10 -- # set +x 00:27:57.394 Process with pid 77539 is not found 00:27:57.394 19:42:22 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:27:57.394 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:27:57.394 19:42:22 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:27:57.394 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:27:57.394 19:42:22 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:27:57.394 19:42:22 -- ftl/ftl.sh@14 -- # killprocess 77539 00:27:57.394 19:42:22 -- 
common/autotest_common.sh@936 -- # '[' -z 77539 ']' 00:27:57.394 19:42:22 -- common/autotest_common.sh@940 -- # kill -0 77539 00:27:57.394 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (77539) - No such process 00:27:57.394 19:42:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 77539 is not found' 00:27:57.394 19:42:22 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:27:57.394 19:42:22 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84897 00:27:57.394 19:42:22 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:57.394 19:42:22 -- ftl/ftl.sh@20 -- # waitforlisten 84897 00:27:57.394 19:42:22 -- common/autotest_common.sh@817 -- # '[' -z 84897 ']' 00:27:57.394 19:42:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.394 19:42:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:57.394 19:42:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:57.394 19:42:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:57.394 19:42:22 -- common/autotest_common.sh@10 -- # set +x 00:27:57.394 [2024-04-24 19:42:22.938361] Starting SPDK v24.05-pre git sha1 dd57ed3e8 / DPDK 23.11.0 initialization... 00:27:57.394 [2024-04-24 19:42:22.938620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84897 ] 00:27:57.653 [2024-04-24 19:42:23.108720] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.912 [2024-04-24 19:42:23.362195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.846 19:42:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:58.846 19:42:24 -- common/autotest_common.sh@850 -- # return 0 00:27:58.846 19:42:24 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:59.104 nvme0n1 00:27:59.104 19:42:24 -- ftl/ftl.sh@22 -- # clear_lvols 00:27:59.104 19:42:24 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:59.104 19:42:24 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:59.364 19:42:24 -- ftl/common.sh@28 -- # stores=c7acd7aa-9fab-4874-a389-35ab3808e732 00:27:59.364 19:42:24 -- ftl/common.sh@29 -- # for lvs in $stores 00:27:59.364 19:42:24 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c7acd7aa-9fab-4874-a389-35ab3808e732 00:27:59.623 19:42:25 -- ftl/ftl.sh@23 -- # killprocess 84897 00:27:59.623 19:42:25 -- common/autotest_common.sh@936 -- # '[' -z 84897 ']' 00:27:59.623 19:42:25 -- common/autotest_common.sh@940 -- # kill -0 84897 00:27:59.623 19:42:25 -- common/autotest_common.sh@941 -- # uname 00:27:59.623 19:42:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:59.623 19:42:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84897 00:27:59.623 killing process with pid 84897 00:27:59.623 19:42:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:59.623 19:42:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:59.623 19:42:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84897' 00:27:59.623 19:42:25 -- 
common/autotest_common.sh@955 -- # kill 84897 00:27:59.623 19:42:25 -- common/autotest_common.sh@960 -- # wait 84897 00:28:02.155 19:42:27 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:02.155 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:02.155 Waiting for block devices as requested 00:28:02.415 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:02.415 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:02.415 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:02.674 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:07.951 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:07.951 19:42:33 -- ftl/ftl.sh@28 -- # remove_shm 00:28:07.951 Remove shared memory files 00:28:07.951 19:42:33 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:07.951 19:42:33 -- ftl/common.sh@205 -- # rm -f rm -f 00:28:07.951 19:42:33 -- ftl/common.sh@206 -- # rm -f rm -f 00:28:07.951 19:42:33 -- ftl/common.sh@207 -- # rm -f rm -f 00:28:07.951 19:42:33 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:07.951 19:42:33 -- ftl/common.sh@209 -- # rm -f rm -f 00:28:07.951 ************************************ 00:28:07.951 END TEST ftl 00:28:07.951 ************************************ 00:28:07.951 00:28:07.951 real 10m35.249s 00:28:07.951 user 13m25.914s 00:28:07.951 sys 1m16.657s 00:28:07.951 19:42:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:07.951 19:42:33 -- common/autotest_common.sh@10 -- # set +x 00:28:07.951 19:42:33 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:28:07.951 19:42:33 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:28:07.951 19:42:33 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:07.951 19:42:33 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:28:07.951 19:42:33 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:28:07.951 19:42:33 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:28:07.951 19:42:33 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:28:07.951 19:42:33 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:28:07.951 19:42:33 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:28:07.951 19:42:33 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:28:07.951 19:42:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:07.951 19:42:33 -- common/autotest_common.sh@10 -- # set +x 00:28:07.951 19:42:33 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:28:07.951 19:42:33 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:28:07.951 19:42:33 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:28:07.951 19:42:33 -- common/autotest_common.sh@10 -- # set +x 00:28:09.856 INFO: APP EXITING 00:28:09.856 INFO: killing all VMs 00:28:09.856 INFO: killing vhost app 00:28:09.856 INFO: EXIT DONE 00:28:09.856 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:10.423 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:10.423 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:10.423 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:10.423 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:10.988 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:11.247 Cleaning 00:28:11.247 Removing: /var/run/dpdk/spdk0/config 00:28:11.247 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:11.247 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:11.247 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:11.247 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:11.247 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:11.247 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:11.247 Removing: /var/run/dpdk/spdk0 00:28:11.247 Removing: /var/run/dpdk/spdk_pid61454 00:28:11.247 Removing: /var/run/dpdk/spdk_pid61709 00:28:11.247 Removing: /var/run/dpdk/spdk_pid61968 00:28:11.247 Removing: /var/run/dpdk/spdk_pid62083 00:28:11.247 Removing: /var/run/dpdk/spdk_pid62139 00:28:11.247 Removing: /var/run/dpdk/spdk_pid62286 00:28:11.247 Removing: /var/run/dpdk/spdk_pid62315 00:28:11.247 Removing: /var/run/dpdk/spdk_pid62516 00:28:11.247 Removing: /var/run/dpdk/spdk_pid62637 00:28:11.247 Removing: /var/run/dpdk/spdk_pid62746 00:28:11.247 Removing: /var/run/dpdk/spdk_pid62876 00:28:11.505 Removing: /var/run/dpdk/spdk_pid62995 00:28:11.505 Removing: /var/run/dpdk/spdk_pid63045 00:28:11.505 Removing: /var/run/dpdk/spdk_pid63091 00:28:11.505 Removing: /var/run/dpdk/spdk_pid63164 00:28:11.505 Removing: /var/run/dpdk/spdk_pid63301 00:28:11.505 Removing: /var/run/dpdk/spdk_pid63761 00:28:11.505 Removing: /var/run/dpdk/spdk_pid63845 00:28:11.505 Removing: /var/run/dpdk/spdk_pid63929 00:28:11.505 Removing: /var/run/dpdk/spdk_pid63950 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64108 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64130 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64290 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64312 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64385 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64409 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64484 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64506 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64713 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64756 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64845 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64940 00:28:11.505 Removing: /var/run/dpdk/spdk_pid64986 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65079 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65135 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65186 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65236 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65292 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65343 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65399 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65455 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65500 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65557 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65615 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65666 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65722 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65778 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65824 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65880 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65936 00:28:11.505 Removing: /var/run/dpdk/spdk_pid65994 00:28:11.505 Removing: /var/run/dpdk/spdk_pid66049 00:28:11.505 Removing: /var/run/dpdk/spdk_pid66105 00:28:11.505 Removing: /var/run/dpdk/spdk_pid66164 00:28:11.505 Removing: /var/run/dpdk/spdk_pid66252 00:28:11.505 Removing: /var/run/dpdk/spdk_pid66383 00:28:11.505 Removing: /var/run/dpdk/spdk_pid66566 00:28:11.505 Removing: /var/run/dpdk/spdk_pid66671 00:28:11.505 Removing: /var/run/dpdk/spdk_pid66728 00:28:11.505 Removing: /var/run/dpdk/spdk_pid67180 00:28:11.505 Removing: /var/run/dpdk/spdk_pid67289 00:28:11.505 Removing: /var/run/dpdk/spdk_pid67420 00:28:11.505 Removing: /var/run/dpdk/spdk_pid67490 
00:28:11.505 Removing: /var/run/dpdk/spdk_pid67525 00:28:11.505 Removing: /var/run/dpdk/spdk_pid67606 00:28:11.505 Removing: /var/run/dpdk/spdk_pid68264 00:28:11.505 Removing: /var/run/dpdk/spdk_pid68311 00:28:11.505 Removing: /var/run/dpdk/spdk_pid68830 00:28:11.505 Removing: /var/run/dpdk/spdk_pid68944 00:28:11.505 Removing: /var/run/dpdk/spdk_pid69079 00:28:11.505 Removing: /var/run/dpdk/spdk_pid69142 00:28:11.505 Removing: /var/run/dpdk/spdk_pid69184 00:28:11.505 Removing: /var/run/dpdk/spdk_pid69221 00:28:11.505 Removing: /var/run/dpdk/spdk_pid71176 00:28:11.505 Removing: /var/run/dpdk/spdk_pid71334 00:28:11.505 Removing: /var/run/dpdk/spdk_pid71343 00:28:11.764 Removing: /var/run/dpdk/spdk_pid71361 00:28:11.764 Removing: /var/run/dpdk/spdk_pid71428 00:28:11.764 Removing: /var/run/dpdk/spdk_pid71432 00:28:11.764 Removing: /var/run/dpdk/spdk_pid71449 00:28:11.764 Removing: /var/run/dpdk/spdk_pid71516 00:28:11.764 Removing: /var/run/dpdk/spdk_pid71522 00:28:11.764 Removing: /var/run/dpdk/spdk_pid71543 00:28:11.764 Removing: /var/run/dpdk/spdk_pid71621 00:28:11.764 Removing: /var/run/dpdk/spdk_pid71625 00:28:11.764 Removing: /var/run/dpdk/spdk_pid71637 00:28:11.764 Removing: /var/run/dpdk/spdk_pid73098 00:28:11.764 Removing: /var/run/dpdk/spdk_pid73214 00:28:11.764 Removing: /var/run/dpdk/spdk_pid73378 00:28:11.764 Removing: /var/run/dpdk/spdk_pid73504 00:28:11.764 Removing: /var/run/dpdk/spdk_pid73630 00:28:11.764 Removing: /var/run/dpdk/spdk_pid73762 00:28:11.764 Removing: /var/run/dpdk/spdk_pid73915 00:28:11.764 Removing: /var/run/dpdk/spdk_pid74000 00:28:11.764 Removing: /var/run/dpdk/spdk_pid74156 00:28:11.764 Removing: /var/run/dpdk/spdk_pid74534 00:28:11.764 Removing: /var/run/dpdk/spdk_pid74586 00:28:11.764 Removing: /var/run/dpdk/spdk_pid75096 00:28:11.764 Removing: /var/run/dpdk/spdk_pid75295 00:28:11.764 Removing: /var/run/dpdk/spdk_pid75410 00:28:11.764 Removing: /var/run/dpdk/spdk_pid75535 00:28:11.764 Removing: /var/run/dpdk/spdk_pid75598 00:28:11.764 Removing: /var/run/dpdk/spdk_pid75633 00:28:11.764 Removing: /var/run/dpdk/spdk_pid76002 00:28:11.764 Removing: /var/run/dpdk/spdk_pid76068 00:28:11.764 Removing: /var/run/dpdk/spdk_pid76159 00:28:11.764 Removing: /var/run/dpdk/spdk_pid76590 00:28:11.764 Removing: /var/run/dpdk/spdk_pid76742 00:28:11.764 Removing: /var/run/dpdk/spdk_pid77539 00:28:11.764 Removing: /var/run/dpdk/spdk_pid77690 00:28:11.764 Removing: /var/run/dpdk/spdk_pid77950 00:28:11.764 Removing: /var/run/dpdk/spdk_pid78057 00:28:11.764 Removing: /var/run/dpdk/spdk_pid78389 00:28:11.764 Removing: /var/run/dpdk/spdk_pid78652 00:28:11.764 Removing: /var/run/dpdk/spdk_pid79106 00:28:11.764 Removing: /var/run/dpdk/spdk_pid79390 00:28:11.764 Removing: /var/run/dpdk/spdk_pid79516 00:28:11.764 Removing: /var/run/dpdk/spdk_pid79591 00:28:11.764 Removing: /var/run/dpdk/spdk_pid79728 00:28:11.764 Removing: /var/run/dpdk/spdk_pid79765 00:28:11.764 Removing: /var/run/dpdk/spdk_pid79846 00:28:11.764 Removing: /var/run/dpdk/spdk_pid80036 00:28:11.764 Removing: /var/run/dpdk/spdk_pid80319 00:28:11.764 Removing: /var/run/dpdk/spdk_pid80669 00:28:11.764 Removing: /var/run/dpdk/spdk_pid81022 00:28:11.764 Removing: /var/run/dpdk/spdk_pid81396 00:28:11.764 Removing: /var/run/dpdk/spdk_pid81826 00:28:11.764 Removing: /var/run/dpdk/spdk_pid81985 00:28:11.764 Removing: /var/run/dpdk/spdk_pid82089 00:28:11.764 Removing: /var/run/dpdk/spdk_pid82590 00:28:11.764 Removing: /var/run/dpdk/spdk_pid82655 00:28:11.764 Removing: /var/run/dpdk/spdk_pid83047 00:28:11.764 Removing: 
/var/run/dpdk/spdk_pid83389 00:28:11.764 Removing: /var/run/dpdk/spdk_pid83814 00:28:11.764 Removing: /var/run/dpdk/spdk_pid83939 00:28:11.764 Removing: /var/run/dpdk/spdk_pid83999 00:28:11.764 Removing: /var/run/dpdk/spdk_pid84067 00:28:11.764 Removing: /var/run/dpdk/spdk_pid84138 00:28:11.764 Removing: /var/run/dpdk/spdk_pid84209 00:28:11.764 Removing: /var/run/dpdk/spdk_pid84468 00:28:11.764 Removing: /var/run/dpdk/spdk_pid84523 00:28:12.023 Removing: /var/run/dpdk/spdk_pid84591 00:28:12.023 Removing: /var/run/dpdk/spdk_pid84664 00:28:12.023 Removing: /var/run/dpdk/spdk_pid84703 00:28:12.023 Removing: /var/run/dpdk/spdk_pid84771 00:28:12.023 Removing: /var/run/dpdk/spdk_pid84897 00:28:12.023 Clean 00:28:12.023 19:42:37 -- common/autotest_common.sh@1437 -- # return 0 00:28:12.023 19:42:37 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:28:12.023 19:42:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:12.023 19:42:37 -- common/autotest_common.sh@10 -- # set +x 00:28:12.023 19:42:37 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:28:12.023 19:42:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:12.023 19:42:37 -- common/autotest_common.sh@10 -- # set +x 00:28:12.281 19:42:37 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:12.281 19:42:37 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:12.281 19:42:37 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:12.281 19:42:37 -- spdk/autotest.sh@389 -- # hash lcov 00:28:12.281 19:42:37 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:12.281 19:42:37 -- spdk/autotest.sh@391 -- # hostname 00:28:12.281 19:42:37 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:12.281 geninfo: WARNING: invalid characters removed from testname! 
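The coverage post-processing that begins here follows a three-stage lcov flow: capture the test-time counters, merge them with the baseline captured before the tests, then strip third-party and system sources from the merged report. A minimal sketch of that flow, with the --rc flags copied verbatim from this run's invocations; $rootdir is a stand-in for the spdk checkout (the surrounding commands spell out the full paths, and further -r passes below also drop the example and app sources):

    # shared lcov options, verbatim from this run's invocations
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q"

    # 1) capture counters produced while the tests ran
    lcov $LCOV_OPTS -c -d "$rootdir" -t "$(hostname)" -o cov_test.info
    # 2) merge with the pre-test baseline
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info
    # 3) drop DPDK and system headers from the final report
    lcov $LCOV_OPTS -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov $LCOV_OPTS -r cov_total.info '/usr/*' -o cov_total.info

The geninfo warning above about invalid characters in the testname is benign: lcov sanitizes the -t label (here derived from the fedora38-cloud image name) before embedding it in the tracefile.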
00:28:38.848 19:43:00 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:38.848 19:43:02 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:39.415 19:43:04 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:41.318 19:43:06 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:43.225 19:43:08 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:45.766 19:43:10 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:47.676 19:43:12 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:47.676 19:43:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:47.676 19:43:13 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:47.676 19:43:13 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:47.676 19:43:13 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:47.676 19:43:13 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.676 19:43:13 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.676 19:43:13 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.676 19:43:13 -- paths/export.sh@5 -- $ export PATH 00:28:47.676 19:43:13 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:47.676 19:43:13 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:47.676 19:43:13 -- common/autobuild_common.sh@435 -- $ date +%s 00:28:47.676 19:43:13 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713987793.XXXXXX 00:28:47.676 19:43:13 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713987793.j0nf0u 00:28:47.676 19:43:13 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:28:47.676 19:43:13 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:28:47.676 19:43:13 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:28:47.676 19:43:13 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:47.676 19:43:13 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:47.676 19:43:13 -- common/autobuild_common.sh@451 -- $ get_config_params 00:28:47.676 19:43:13 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:28:47.676 19:43:13 -- common/autotest_common.sh@10 -- $ set +x 00:28:47.677 19:43:13 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:28:47.677 19:43:13 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:28:47.677 19:43:13 -- pm/common@17 -- $ local monitor 00:28:47.677 19:43:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:47.677 19:43:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=86598 00:28:47.677 19:43:13 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:47.677 19:43:13 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=86599 00:28:47.677 19:43:13 -- pm/common@26 -- $ sleep 1 00:28:47.677 19:43:13 -- pm/common@21 -- $ date +%s 00:28:47.677 19:43:13 -- pm/common@21 -- $ date +%s 00:28:47.677 19:43:13 -- pm/common@21 -- $ sudo -E 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713987793 00:28:47.677 19:43:13 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713987793 00:28:47.677 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713987793_collect-vmstat.pm.log 00:28:47.677 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713987793_collect-cpu-load.pm.log 00:28:48.612 19:43:14 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:28:48.612 19:43:14 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:28:48.612 19:43:14 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:28:48.612 19:43:14 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:48.612 19:43:14 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:48.612 19:43:14 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:48.612 19:43:14 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:48.612 19:43:14 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:48.612 19:43:14 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:48.612 19:43:14 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:48.612 19:43:14 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:48.612 19:43:14 -- pm/common@30 -- $ signal_monitor_resources TERM 00:28:48.612 19:43:14 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:28:48.612 19:43:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:48.612 19:43:14 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:28:48.612 19:43:14 -- pm/common@45 -- $ pid=86606 00:28:48.612 19:43:14 -- pm/common@52 -- $ sudo kill -TERM 86606 00:28:48.612 19:43:14 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:48.612 19:43:14 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:28:48.612 19:43:14 -- pm/common@45 -- $ pid=86607 00:28:48.612 19:43:14 -- pm/common@52 -- $ sudo kill -TERM 86607 00:28:48.612 + [[ -n 5349 ]] 00:28:48.612 + sudo kill 5349 00:28:48.622 [Pipeline] } 00:28:48.641 [Pipeline] // timeout 00:28:48.646 [Pipeline] } 00:28:48.665 [Pipeline] // stage 00:28:48.671 [Pipeline] } 00:28:48.686 [Pipeline] // catchError 00:28:48.695 [Pipeline] stage 00:28:48.698 [Pipeline] { (Stop VM) 00:28:48.712 [Pipeline] sh 00:28:48.991 + vagrant halt 00:28:51.523 ==> default: Halting domain... 00:28:58.109 [Pipeline] sh 00:28:58.384 + vagrant destroy -f 00:29:00.913 ==> default: Removing domain... 
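The resource monitors started above are torn down through the pidfile handshake visible in the pm/common trace: each collector wrote its PID under the power/ output directory, and stop_monitor_resources signals only a monitor whose pidfile actually exists. A minimal sketch of that guard (stop_one_monitor and $output_dir are stand-in names; the pidfile basenames are this run's):

    # signal one collector if, and only if, it actually started
    stop_one_monitor() {
        local pidfile=$1
        [[ -e $pidfile ]] || return 0          # never started: nothing to kill
        sudo kill -TERM "$(cat "$pidfile")"    # let the collector flush its log and exit
    }

    stop_one_monitor "$output_dir/power/collect-cpu-load.pid"
    stop_one_monitor "$output_dir/power/collect-vmstat.pid"

The existence check is what keeps teardown idempotent here: a monitor that failed to launch leaves no pidfile, so the TERM is simply skipped instead of killing a recycled PID.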
00:29:01.491 [Pipeline] sh 00:29:01.771 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:29:01.780 [Pipeline] } 00:29:01.797 [Pipeline] // stage 00:29:01.802 [Pipeline] } 00:29:01.818 [Pipeline] // dir 00:29:01.823 [Pipeline] } 00:29:01.840 [Pipeline] // wrap 00:29:01.846 [Pipeline] } 00:29:01.861 [Pipeline] // catchError 00:29:01.869 [Pipeline] stage 00:29:01.871 [Pipeline] { (Epilogue) 00:29:01.888 [Pipeline] sh 00:29:02.169 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:07.446 [Pipeline] catchError 00:29:07.448 [Pipeline] { 00:29:07.461 [Pipeline] sh 00:29:07.744 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:07.744 Artifacts sizes are good 00:29:07.754 [Pipeline] } 00:29:07.771 [Pipeline] // catchError 00:29:07.783 [Pipeline] archiveArtifacts 00:29:07.790 Archiving artifacts 00:29:07.921 [Pipeline] cleanWs 00:29:07.933 [WS-CLEANUP] Deleting project workspace... 00:29:07.933 [WS-CLEANUP] Deferred wipeout is used... 00:29:07.940 [WS-CLEANUP] done 00:29:07.942 [Pipeline] } 00:29:07.959 [Pipeline] // stage 00:29:07.965 [Pipeline] } 00:29:07.982 [Pipeline] // node 00:29:07.987 [Pipeline] End of Pipeline 00:29:08.027 Finished: SUCCESS
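One loose end worth reading back for: the two messages "/home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected" and the same at line 89, logged at 19:42:22. That is the classic failure of a numeric test whose variable expanded to nothing: with the variable unset, "[ $flag -eq 1 ]" hands the [ builtin only "-eq 1" to parse. The harness treated both as non-fatal (the test simply evaluated false and the run fell through to cleanup), so they did not affect the SUCCESS above. A minimal sketch of the usual guard; $flag is a stand-in, since the variable actually tested at ftl.sh lines 82/89 is not visible in this log:

    # with flag unset, this reproduces the logged error:
    #   [ $flag -eq 1 ]   ->   [: -eq: unary operator expected
    # quoting plus a default expansion makes the test total:
    if [ "${flag:-0}" -eq 1 ]; then
        echo "optional ftl stage enabled"   # stand-in for the guarded branch
    fi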