00:00:00.000 Started by upstream project "autotest-per-patch" build number 121036
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.051 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.052 The recommended git tool is: git
00:00:00.052 using credential 00000000-0000-0000-0000-000000000002
00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.077 Fetching changes from the remote Git repository
00:00:00.078 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.123 Using shallow fetch with depth 1
00:00:00.123 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.123 > git --version # timeout=10
00:00:00.172 > git --version # 'git version 2.39.2'
00:00:00.172 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.173 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.173 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.264 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.276 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.287 Checking out Revision 6e1fadd1eee50389429f9abb33dde5face8ca717 (FETCH_HEAD)
00:00:04.288 > git config core.sparsecheckout # timeout=10
00:00:04.297 > git read-tree -mu HEAD # timeout=10
00:00:04.312 > git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=5
00:00:04.330 Commit message: "pool: attach build logs for failed merge builds"
00:00:04.330 > git rev-list --no-walk 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=10
00:00:04.430 [Pipeline] Start of Pipeline
00:00:04.443 [Pipeline] library
00:00:04.444 Loading library shm_lib@master
00:00:04.444 Library shm_lib@master is cached. Copying from home.
00:00:04.458 [Pipeline] node
00:00:04.463 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:04.466 [Pipeline] {
00:00:04.478 [Pipeline] catchError
00:00:04.480 [Pipeline] {
00:00:04.496 [Pipeline] wrap
00:00:04.505 [Pipeline] {
00:00:04.513 [Pipeline] stage
00:00:04.515 [Pipeline] { (Prologue)
00:00:04.531 [Pipeline] echo
00:00:04.532 Node: VM-host-WFP1
00:00:04.537 [Pipeline] cleanWs
00:00:04.547 [WS-CLEANUP] Deleting project workspace...
00:00:04.547 [WS-CLEANUP] Deferred wipeout is used...
00:00:04.552 [WS-CLEANUP] done
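For reference, the checkout above is Jenkins' standard shallow-clone sequence. It can be reproduced by hand roughly as follows (the target directory name is arbitrary here, and the credential helper and proxy setup from the log are omitted):

    git init jbp && cd jbp
    git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    # --depth=1 fetches only the tip commit of refs/heads/master
    git fetch --tags --force --progress --depth=1 origin refs/heads/master
    # detached-HEAD checkout of the revision reported in the log
    git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717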
00:00:04.702 [Pipeline] setCustomBuildProperty
00:00:04.754 [Pipeline] nodesByLabel
00:00:04.755 Found a total of 1 nodes with the 'sorcerer' label
00:00:04.764 [Pipeline] httpRequest
00:00:04.768 HttpMethod: GET
00:00:04.769 URL: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:04.773 Sending request to url: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:04.785 Response Code: HTTP/1.1 200 OK
00:00:04.785 Success: Status code 200 is in the accepted range: 200,404
00:00:04.785 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:08.176 [Pipeline] sh
00:00:08.460 + tar --no-same-owner -xf jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz
00:00:08.485 [Pipeline] httpRequest
00:00:08.491 HttpMethod: GET
00:00:08.491 URL: http://10.211.164.96/packages/spdk_be7d3cb4601206320f6e6ac50006fb394fe209ac.tar.gz
00:00:08.492 Sending request to url: http://10.211.164.96/packages/spdk_be7d3cb4601206320f6e6ac50006fb394fe209ac.tar.gz
00:00:08.493 Response Code: HTTP/1.1 200 OK
00:00:08.494 Success: Status code 200 is in the accepted range: 200,404
00:00:08.494 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_be7d3cb4601206320f6e6ac50006fb394fe209ac.tar.gz
00:00:25.467 [Pipeline] sh
00:00:25.749 + tar --no-same-owner -xf spdk_be7d3cb4601206320f6e6ac50006fb394fe209ac.tar.gz
00:00:28.302 [Pipeline] sh
00:00:28.583 + git -C spdk log --oneline -n5
00:00:28.583 be7d3cb46 nvmf: rm deprecated param of nvmf_get_subsystems
00:00:28.583 4907d1565 lib/nvmf: deprecate [listen_]address.transport
00:00:28.583 ea150257d nvmf/rpc: fix input validation for nvmf_subsystem_add_listener
00:00:28.583 dd57ed3e8 sma: add listener check on vfio device creation
00:00:28.583 d36d2b7e8 doc: mark adrfam as optional
00:00:28.611 [Pipeline] writeFile
00:00:28.627 [Pipeline] sh
00:00:28.911 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:28.922 [Pipeline] sh
00:00:29.201 + cat autorun-spdk.conf
00:00:29.201 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.201 SPDK_TEST_NVME=1
00:00:29.201 SPDK_TEST_FTL=1
00:00:29.201 SPDK_TEST_ISAL=1
00:00:29.201 SPDK_RUN_ASAN=1
00:00:29.201 SPDK_RUN_UBSAN=1
00:00:29.201 SPDK_TEST_XNVME=1
00:00:29.201 SPDK_TEST_NVME_FDP=1
00:00:29.201 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:29.209 RUN_NIGHTLY=0
00:00:29.211 [Pipeline] }
00:00:29.229 [Pipeline] // stage
00:00:29.245 [Pipeline] stage
00:00:29.247 [Pipeline] { (Run VM)
00:00:29.262 [Pipeline] sh
00:00:29.546 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:29.546 + echo 'Start stage prepare_nvme.sh'
00:00:29.546 Start stage prepare_nvme.sh
00:00:29.546 + [[ -n 0 ]]
00:00:29.546 + disk_prefix=ex0
00:00:29.546 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:00:29.546 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:00:29.546 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:00:29.546 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:29.546 ++ SPDK_TEST_NVME=1
00:00:29.546 ++ SPDK_TEST_FTL=1
00:00:29.546 ++ SPDK_TEST_ISAL=1
00:00:29.546 ++ SPDK_RUN_ASAN=1
00:00:29.546 ++ SPDK_RUN_UBSAN=1
00:00:29.546 ++ SPDK_TEST_XNVME=1
00:00:29.546 ++ SPDK_TEST_NVME_FDP=1
00:00:29.546 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:29.546 ++ RUN_NIGHTLY=0
00:00:29.546 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:00:29.546 + nvme_files=()
00:00:29.546 + declare -A nvme_files
00:00:29.546 + backend_dir=/var/lib/libvirt/images/backends
00:00:29.546 + nvme_files['nvme.img']=5G
00:00:29.546 + nvme_files['nvme-cmb.img']=5G
00:00:29.546 + nvme_files['nvme-multi0.img']=4G
00:00:29.546 + nvme_files['nvme-multi1.img']=4G
00:00:29.546 + nvme_files['nvme-multi2.img']=4G
00:00:29.546 + nvme_files['nvme-openstack.img']=8G
00:00:29.546 + nvme_files['nvme-zns.img']=5G
00:00:29.546 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:29.546 + (( SPDK_TEST_FTL == 1 ))
00:00:29.546 + nvme_files["nvme-ftl.img"]=6G
00:00:29.546 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:29.546 + nvme_files["nvme-fdp.img"]=1G
00:00:29.546 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:29.546 + for nvme in "${!nvme_files[@]}"
00:00:29.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:00:29.546 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:29.546 + for nvme in "${!nvme_files[@]}"
00:00:29.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G
00:00:29.546 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:29.546 + for nvme in "${!nvme_files[@]}"
00:00:29.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:00:29.546 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:29.546 + for nvme in "${!nvme_files[@]}"
00:00:29.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:00:29.546 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:29.546 + for nvme in "${!nvme_files[@]}"
00:00:29.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:00:29.546 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:29.546 + for nvme in "${!nvme_files[@]}"
00:00:29.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:00:29.806 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:29.806 + for nvme in "${!nvme_files[@]}"
00:00:29.806 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:00:29.806 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:29.806 + for nvme in "${!nvme_files[@]}"
00:00:29.806 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G
00:00:29.806 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:29.806 + for nvme in "${!nvme_files[@]}"
00:00:29.806 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:00:29.806 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:29.806 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:00:29.806 + echo 'End stage prepare_nvme.sh'
00:00:29.806 End stage prepare_nvme.sh
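The repeated create_nvme_img.sh calls above are driven by the nvme_files associative array populated at the top of the stage. Reconstructed from the xtrace output (this is a sketch, not the verbatim prepare_nvme.sh source), the loop presumably looks like:

    for nvme in "${!nvme_files[@]}"; do
        # key is the image name (e.g. nvme-ftl.img), value is its size (e.g. 6G)
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "$backend_dir/${disk_prefix}-${nvme}" \
            -s "${nvme_files[$nvme]}"
    done

Bash associative arrays are unordered, which is why the images are created in no particular order in the log.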
00:00:29.820 [Pipeline] sh
00:00:30.109 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:30.109 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38
00:00:30.109
00:00:30.109 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:00:30.109 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:00:30.109 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:00:30.109 HELP=0
00:00:30.109 DRY_RUN=0
00:00:30.109 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,
00:00:30.109 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:30.109 NVME_AUTO_CREATE=0
00:00:30.109 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,,
00:00:30.109 NVME_CMB=,,,,
00:00:30.109 NVME_PMR=,,,,
00:00:30.109 NVME_ZNS=,,,,
00:00:30.109 NVME_MS=true,,,,
00:00:30.109 NVME_FDP=,,,on,
00:00:30.109 SPDK_VAGRANT_DISTRO=fedora38
00:00:30.109 SPDK_VAGRANT_VMCPU=10
00:00:30.109 SPDK_VAGRANT_VMRAM=12288
00:00:30.109 SPDK_VAGRANT_PROVIDER=libvirt
00:00:30.109 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:30.109 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:30.109 SPDK_OPENSTACK_NETWORK=0
00:00:30.109 VAGRANT_PACKAGE_BOX=0
00:00:30.109 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:30.109 FORCE_DISTRO=true
00:00:30.109 VAGRANT_BOX_VERSION=
00:00:30.109 EXTRA_VAGRANTFILES=
00:00:30.109 NIC_MODEL=e1000
00:00:30.109
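Each -b argument in the Setup line pairs a backing image with per-controller NVMe options. Matching its positional fields against the NVME_* dump above, the format appears to be <image>,<type>,<namespaces>,<cmb>,<pmr>,<zns>,<ms>,<fdp> — this field order is inferred from this log alone, not taken from the script's documentation. For example:

    # FTL disk: metadata (ms) enabled in the 7th field, everything else defaulted
    -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true
    # FDP disk: fdp=on in the trailing field
    -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on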
00:00:30.109 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt'
00:00:30.109 /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:00:32.647 Bringing machine 'default' up with 'libvirt' provider...
00:00:34.022 ==> default: Creating image (snapshot of base box volume).
00:00:34.022 ==> default: Creating domain with the following settings...
00:00:34.022 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713989223_2e875c67cf7a94c8e5c3
00:00:34.022 ==> default: -- Domain type: kvm
00:00:34.022 ==> default: -- Cpus: 10
00:00:34.022 ==> default: -- Feature: acpi
00:00:34.022 ==> default: -- Feature: apic
00:00:34.022 ==> default: -- Feature: pae
00:00:34.022 ==> default: -- Memory: 12288M
00:00:34.022 ==> default: -- Memory Backing: hugepages:
00:00:34.022 ==> default: -- Management MAC:
00:00:34.022 ==> default: -- Loader:
00:00:34.022 ==> default: -- Nvram:
00:00:34.022 ==> default: -- Base box: spdk/fedora38
00:00:34.022 ==> default: -- Storage pool: default
00:00:34.022 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713989223_2e875c67cf7a94c8e5c3.img (20G)
00:00:34.022 ==> default: -- Volume Cache: default
00:00:34.022 ==> default: -- Kernel:
00:00:34.022 ==> default: -- Initrd:
00:00:34.022 ==> default: -- Graphics Type: vnc
00:00:34.022 ==> default: -- Graphics Port: -1
00:00:34.022 ==> default: -- Graphics IP: 127.0.0.1
00:00:34.022 ==> default: -- Graphics Password: Not defined
00:00:34.022 ==> default: -- Video Type: cirrus
00:00:34.022 ==> default: -- Video VRAM: 9216
00:00:34.022 ==> default: -- Sound Type:
00:00:34.022 ==> default: -- Keymap: en-us
00:00:34.022 ==> default: -- TPM Path:
00:00:34.022 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:34.022 ==> default: -- Command line args:
00:00:34.022 ==> default: -> value=-device,
00:00:34.022 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:34.022 ==> default: -> value=-drive,
00:00:34.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:00:34.022 ==> default: -> value=-device,
00:00:34.022 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:00:34.022 ==> default: -> value=-device,
00:00:34.022 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:34.022 ==> default: -> value=-drive,
00:00:34.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0,
00:00:34.022 ==> default: -> value=-device,
00:00:34.022 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.022 ==> default: -> value=-device,
00:00:34.022 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:00:34.022 ==> default: -> value=-drive,
00:00:34.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:00:34.022 ==> default: -> value=-device,
00:00:34.022 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.022 ==> default: -> value=-drive,
00:00:34.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:00:34.022 ==> default: -> value=-device,
00:00:34.022 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.022 ==> default: -> value=-drive,
00:00:34.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:00:34.022 ==> default: -> value=-device,
00:00:34.022 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:34.022 ==> default: -> value=-device,
00:00:34.022 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:00:34.022 ==> default: -> value=-device,
00:00:34.022 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:00:34.022 ==> default: -> value=-drive,
00:00:34.022 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:00:34.022 ==> default: -> value=-device,
00:00:34.022 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
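Collected into a single invocation, the arguments for the fourth controller above amount to the following qemu-system-x86_64 sketch (the device/drive arguments are taken verbatim from the log; the other three controllers are omitted). The nvme-subsys device with fdp=on is what exposes NVMe Flexible Data Placement to the SPDK_TEST_NVME_FDP tests:

    qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096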
00:00:34.281 ==> default: Creating shared folders metadata...
00:00:34.281 ==> default: Starting domain.
00:00:36.215 ==> default: Waiting for domain to get an IP address...
00:00:54.424 ==> default: Waiting for SSH to become available...
00:00:55.365 ==> default: Configuring and enabling network interfaces...
00:01:00.640 default: SSH address: 192.168.121.18:22
00:01:00.640 default: SSH username: vagrant
00:01:00.640 default: SSH auth method: private key
00:01:04.005 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:12.149 ==> default: Mounting SSHFS shared folder...
00:01:14.065 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:01:14.065 ==> default: Checking Mount..
00:01:15.996 ==> default: Folder Successfully Mounted!
00:01:15.996 ==> default: Running provisioner: file...
00:01:16.564 default: ~/.gitconfig => .gitconfig
00:01:17.132
00:01:17.132 SUCCESS!
00:01:17.132
00:01:17.133 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use.
00:01:17.133 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:17.133 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm.
00:01:17.133
00:01:17.141 [Pipeline] }
00:01:17.160 [Pipeline] // stage
00:01:17.169 [Pipeline] dir
00:01:17.170 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora38-libvirt
00:01:17.171 [Pipeline] {
00:01:17.186 [Pipeline] catchError
00:01:17.188 [Pipeline] {
00:01:17.202 [Pipeline] sh
00:01:17.489 + vagrant ssh-config --host vagrant
00:01:17.489 + sed -ne /^Host/,$p
00:01:17.489 + tee ssh_conf
00:01:20.786 Host vagrant
00:01:20.786 HostName 192.168.121.18
00:01:20.786 User vagrant
00:01:20.786 Port 22
00:01:20.786 UserKnownHostsFile /dev/null
00:01:20.786 StrictHostKeyChecking no
00:01:20.786 PasswordAuthentication no
00:01:20.786 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38
00:01:20.786 IdentitiesOnly yes
00:01:20.786 LogLevel FATAL
00:01:20.786 ForwardAgent yes
00:01:20.786 ForwardX11 yes
00:01:20.786
00:01:20.804 [Pipeline] withEnv
00:01:20.807 [Pipeline] {
00:01:20.824 [Pipeline] sh
00:01:21.120 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:21.120 source /etc/os-release
00:01:21.120 [[ -e /image.version ]] && img=$(< /image.version)
00:01:21.120 # Minimal, systemd-like check.
00:01:21.120 if [[ -e /.dockerenv ]]; then
00:01:21.120 # Clear garbage from the node's name:
00:01:21.120 # agt-er_autotest_547-896 -> autotest_547-896
00:01:21.120 # $HOSTNAME is the actual container id
00:01:21.120 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:21.120 if mountpoint -q /etc/hostname; then
00:01:21.120 # We can assume this is a mount from a host where container is running,
00:01:21.120 # so fetch its hostname to easily identify the target swarm worker.
00:01:21.120 container="$(< /etc/hostname) ($agent)"
00:01:21.120 else
00:01:21.120 # Fallback
00:01:21.120 container=$agent
00:01:21.120 fi
00:01:21.120 fi
00:01:21.120 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:21.120
00:01:21.395 [Pipeline] }
00:01:21.416 [Pipeline] // withEnv
00:01:21.426 [Pipeline] setCustomBuildProperty
00:01:21.443 [Pipeline] stage
00:01:21.445 [Pipeline] { (Tests)
00:01:21.459 [Pipeline] sh
00:01:21.750 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:22.024 [Pipeline] timeout
00:01:22.025 Timeout set to expire in 40 min
00:01:22.027 [Pipeline] {
00:01:22.045 [Pipeline] sh
00:01:22.327 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:22.895 HEAD is now at be7d3cb46 nvmf: rm deprecated param of nvmf_get_subsystems
00:01:22.909 [Pipeline] sh
00:01:23.191 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:23.472 [Pipeline] sh
00:01:23.754 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:24.028 [Pipeline] sh
00:01:24.316 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo
00:01:24.575 ++ readlink -f spdk_repo
00:01:24.575 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:24.575 + [[ -n /home/vagrant/spdk_repo ]]
00:01:24.575 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:24.575 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:24.575 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:24.575 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:24.575 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:24.575 + cd /home/vagrant/spdk_repo
00:01:24.575 + source /etc/os-release
00:01:24.575 ++ NAME='Fedora Linux'
00:01:24.575 ++ VERSION='38 (Cloud Edition)'
00:01:24.575 ++ ID=fedora
00:01:24.575 ++ VERSION_ID=38
00:01:24.575 ++ VERSION_CODENAME=
00:01:24.575 ++ PLATFORM_ID=platform:f38
00:01:24.575 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:24.575 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:24.575 ++ LOGO=fedora-logo-icon
00:01:24.575 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:24.575 ++ HOME_URL=https://fedoraproject.org/
00:01:24.575 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:24.575 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:24.575 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:24.575 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:24.575 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:24.575 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:24.575 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:24.575 ++ SUPPORT_END=2024-05-14
00:01:24.575 ++ VARIANT='Cloud Edition'
00:01:24.575 ++ VARIANT_ID=cloud
00:01:24.575 + uname -a
00:01:24.575 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:24.575 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:25.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:25.402 Hugepages
00:01:25.402 node hugesize free / total
00:01:25.402 node0 1048576kB 0 / 0
00:01:25.402 node0 2048kB 0 / 0
00:01:25.402
00:01:25.402 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:25.402 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:25.402 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:25.402 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:25.402 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:25.402 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:25.402 + rm -f /tmp/spdk-ld-path
00:01:25.402 + source autorun-spdk.conf
00:01:25.402 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.402 ++ SPDK_TEST_NVME=1
00:01:25.402 ++ SPDK_TEST_FTL=1
00:01:25.402 ++ SPDK_TEST_ISAL=1
00:01:25.402 ++ SPDK_RUN_ASAN=1
00:01:25.402 ++ SPDK_RUN_UBSAN=1
00:01:25.402 ++ SPDK_TEST_XNVME=1
00:01:25.402 ++ SPDK_TEST_NVME_FDP=1
00:01:25.402 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.402 ++ RUN_NIGHTLY=0
00:01:25.402 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:25.402 + [[ -n '' ]]
00:01:25.402 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:25.402 + for M in /var/spdk/build-*-manifest.txt
00:01:25.402 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:25.402 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:25.661 + for M in /var/spdk/build-*-manifest.txt
00:01:25.661 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:25.661 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:25.661 ++ uname
00:01:25.661 + [[ Linux == \L\i\n\u\x ]]
00:01:25.661 + sudo dmesg -T
00:01:25.661 + sudo dmesg --clear
00:01:25.661 + dmesg_pid=5140
00:01:25.661 + [[ Fedora Linux == FreeBSD ]]
00:01:25.661 + sudo dmesg -Tw
00:01:25.661 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:25.661 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:25.661 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:25.661 + [[ -x /usr/src/fio-static/fio ]]
00:01:25.661 + export FIO_BIN=/usr/src/fio-static/fio
00:01:25.661 + FIO_BIN=/usr/src/fio-static/fio
00:01:25.661 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:25.661 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:25.661 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:25.661 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:25.661 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:25.661 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:25.661 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:25.661 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:25.661 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:25.661 Test configuration:
00:01:25.661 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.661 SPDK_TEST_NVME=1
00:01:25.661 SPDK_TEST_FTL=1
00:01:25.661 SPDK_TEST_ISAL=1
00:01:25.661 SPDK_RUN_ASAN=1
00:01:25.661 SPDK_RUN_UBSAN=1
00:01:25.661 SPDK_TEST_XNVME=1
00:01:25.661 SPDK_TEST_NVME_FDP=1
00:01:25.661 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.661 RUN_NIGHTLY=0
20:07:55 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
20:07:55 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]]
20:07:55 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
20:07:55 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
20:07:55 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
20:07:55 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
20:07:55 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
20:07:55 -- paths/export.sh@5 -- $ export PATH
20:07:55 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
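The three PATH assignments above are unconditional prepends; each sourced copy of paths/export.sh effectively does the following (a sketch, not the verbatim file), which is why /opt/protoc, /opt/go and /opt/golangci appear twice in the final echo after repeated sourcing:

    # prepend toolchain bins to whatever PATH already holds; no deduplication
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH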
20:07:55 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
20:07:55 -- common/autobuild_common.sh@435 -- $ date +%s
20:07:55 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713989275.XXXXXX
00:01:25.921 20:07:55 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713989275.XaFFzj
00:01:25.921 20:07:55 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:25.921 20:07:55 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:01:25.921 20:07:55 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:25.921 20:07:55 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:25.921 20:07:55 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:25.921 20:07:55 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:25.921 20:07:55 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:01:25.921 20:07:55 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.921 20:07:55 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:25.921 20:07:55 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:01:25.921 20:07:55 -- pm/common@17 -- $ local monitor
00:01:25.921 20:07:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:25.921 20:07:55 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5174
00:01:25.921 20:07:55 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:25.921 20:07:55 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5176
00:01:25.921 20:07:55 -- pm/common@26 -- $ sleep 1
00:01:25.921 20:07:55 -- pm/common@21 -- $ date +%s
00:01:25.921 20:07:55 -- pm/common@21 -- $ date +%s
00:01:25.921 20:07:55 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713989275
00:01:25.921 20:07:55 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713989275
00:01:25.921 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713989275_collect-vmstat.pm.log
00:01:25.921 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713989275_collect-cpu-load.pm.log
00:01:26.858 20:07:56 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
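The two Redirecting lines come from the pm monitors being launched in the background and stashed by PID so the EXIT trap can stop them later. Judging from the xtrace above, the pattern in scripts/perf/pm/common is roughly the following sketch (reconstructed, not the verbatim source; $pm_dir and $output_dir stand in for whatever paths the script actually uses):

    declare -A MONITOR_RESOURCES_PIDS
    for monitor in "${MONITOR_RESOURCES[@]}"; do
        # e.g. collect-cpu-load, collect-vmstat; -l logs to a per-run .pm.log
        sudo -E "$pm_dir/$monitor" -d "$output_dir/power" -l -p "monitor.autobuild.sh.$(date +%s)" &
        MONITOR_RESOURCES_PIDS["$monitor"]=$!
    done
    trap stop_monitor_resources EXIT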
00:01:26.858 20:07:56 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:26.858 20:07:56 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:26.858 20:07:56 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:26.858 20:07:56 -- spdk/autobuild.sh@16 -- $ date -u
00:01:26.858 Wed Apr 24 08:07:56 PM UTC 2024
00:01:26.858 20:07:56 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:26.858 v24.05-pre-416-gbe7d3cb46
00:01:26.858 20:07:56 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:26.858 20:07:56 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:26.858 20:07:56 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:26.858 20:07:56 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:26.858 20:07:56 -- common/autotest_common.sh@10 -- $ set +x
00:01:26.858 ************************************
00:01:26.858 START TEST asan
00:01:26.858 ************************************
00:01:26.858 using asan
00:01:26.858 20:07:57 -- common/autotest_common.sh@1111 -- $ echo 'using asan'
00:01:26.858
00:01:26.858 real 0m0.000s
00:01:26.858 user 0m0.000s
00:01:26.858 sys 0m0.000s
00:01:26.858 20:07:57 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:01:26.858 ************************************
00:01:26.858 END TEST asan
00:01:26.858 ************************************
00:01:27.118 20:07:57 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:27.118 20:07:57 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:27.118 20:07:57 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:01:27.118 20:07:57 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:01:27.118 20:07:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.118 ************************************
00:01:27.118 START TEST ubsan
00:01:27.118 ************************************
00:01:27.118 using ubsan
00:01:27.118 20:07:57 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:01:27.118
00:01:27.118 real 0m0.000s
00:01:27.118 user 0m0.000s
00:01:27.118 sys 0m0.000s
00:01:27.118 20:07:57 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:01:27.118 20:07:57 -- common/autotest_common.sh@10 -- $ set +x
00:01:27.118 ************************************
00:01:27.118 END TEST ubsan
00:01:27.118 ************************************
00:01:27.118 20:07:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:27.118 20:07:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:27.118 20:07:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:27.118 20:07:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:27.118 20:07:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:27.118 20:07:57 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:27.118 20:07:57 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:27.118 20:07:57 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:27.118 20:07:57 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:27.377 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:27.377 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:27.950 Using 'verbs' RDMA provider
00:01:43.824 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:01.919 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:01.919 Creating mk/config.mk...done.
00:02:01.919 Creating mk/cc.flags.mk...done.
00:02:01.919 Type 'make' to build.
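The START/END TEST banners around asan, ubsan, and the make step that follows come from the run_test helper in SPDK's autotest_common.sh. Reconstructed from the banners and the real/user/sys timing output in this log (a rough sketch, not the verbatim source):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        # run the wrapped command and report wall/user/sys time
        time "$@"
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }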
00:02:01.919 20:08:29 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:01.919 20:08:29 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:01.919 20:08:29 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:01.919 20:08:29 -- common/autotest_common.sh@10 -- $ set +x
00:02:01.919 ************************************
00:02:01.919 START TEST make
00:02:01.919 ************************************
00:02:01.919 20:08:29 -- common/autotest_common.sh@1111 -- $ make -j10
00:02:01.919 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:01.919 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:01.919 meson setup builddir \
00:02:01.919 -Dwith-libaio=enabled \
00:02:01.919 -Dwith-liburing=enabled \
00:02:01.919 -Dwith-libvfn=disabled \
00:02:01.919 -Dwith-spdk=false && \
00:02:01.919 meson compile -C builddir && \
00:02:01.919 cd -)
00:02:01.919 make[1]: Nothing to be done for 'all'.
00:02:02.485 The Meson build system
00:02:02.485 Version: 1.3.1
00:02:02.485 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:02.485 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:02.485 Build type: native build
00:02:02.485 Project name: xnvme
00:02:02.485 Project version: 0.7.3
00:02:02.485 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:02.485 C linker for the host machine: cc ld.bfd 2.39-16
00:02:02.485 Host machine cpu family: x86_64
00:02:02.485 Host machine cpu: x86_64
00:02:02.485 Message: host_machine.system: linux
00:02:02.485 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:02.485 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:02.485 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:02.485 Run-time dependency threads found: YES
00:02:02.485 Has header "setupapi.h" : NO
00:02:02.485 Has header "linux/blkzoned.h" : YES
00:02:02.485 Has header "linux/blkzoned.h" : YES (cached)
00:02:02.485 Has header "libaio.h" : YES
00:02:02.485 Library aio found: YES
00:02:02.485 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:02.485 Run-time dependency liburing found: YES 2.2
00:02:02.485 Dependency libvfn skipped: feature with-libvfn disabled
00:02:02.485 Run-time dependency appleframeworks found: NO (tried framework)
00:02:02.485 Run-time dependency appleframeworks found: NO (tried framework)
00:02:02.485 Configuring xnvme_config.h using configuration
00:02:02.485 Configuring xnvme.spec using configuration
00:02:02.485 Run-time dependency bash-completion found: YES 2.11
00:02:02.485 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:02.485 Program cp found: YES (/usr/bin/cp)
00:02:02.485 Has header "winsock2.h" : NO
00:02:02.485 Has header "dbghelp.h" : NO
00:02:02.485 Library rpcrt4 found: NO
00:02:02.485 Library rt found: YES
00:02:02.485 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:02.485 Found CMake: /usr/bin/cmake (3.27.7)
00:02:02.485 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:02.485 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:02.485 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:02.485 Build targets in project: 32
00:02:02.485
00:02:02.485 xnvme 0.7.3
00:02:02.485
00:02:02.485 User defined options
00:02:02.485 with-libaio : enabled
00:02:02.485 with-liburing: enabled
00:02:02.485 with-libvfn : disabled
00:02:02.485 with-spdk : false
00:02:02.485
00:02:02.485 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:02.779 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:03.037 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:03.037 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:03.037 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:03.037 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:03.037 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:03.037 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:03.037 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:03.037 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:03.037 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:03.037 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:03.037 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:03.037 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:03.037 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:03.037 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:03.037 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:03.037 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:03.037 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:03.037 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:03.296 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:03.296 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:03.296 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:03.296 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:03.296 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:03.296 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:03.296 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:03.296 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:03.296 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:03.296 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:03.296 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:03.296 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:03.296 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:03.296 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:03.296 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:03.296 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:03.296 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:03.296 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:03.296 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:03.296 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:03.296 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:03.296 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:03.296 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:03.296 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:03.296 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:03.296 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:03.296 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:03.296 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:03.296 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:03.296 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:03.296 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:03.296 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:03.296 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:03.296 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:03.555 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:03.555 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:03.555 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:03.555 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:03.555 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:03.555 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:03.555 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:03.555 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:03.555 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:03.555 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:03.555 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:03.555 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:03.555 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:03.555 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:03.555 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:03.555 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:03.813 [69/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:03.813 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:03.813 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:03.813 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:03.813 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:03.813 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:03.813 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:03.813 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:03.813 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:03.813 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:03.813 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:03.813 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:03.813 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:03.813 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:03.813 [83/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:03.813 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:04.072 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:04.072 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:04.072 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:04.072 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:04.072 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:04.072 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:04.072 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:04.072 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:04.072 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:04.072 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:04.072 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:04.072 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:04.072 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:04.072 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:04.072 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:04.072 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:04.072 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:04.072 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:04.072 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:04.072 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:04.072 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:04.072 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:04.072 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:04.072 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:04.072 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:04.330 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:04.330 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:04.330 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:04.330 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:04.330 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:04.330 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:04.330 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:04.330 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:04.330 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:04.330 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:04.330 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:04.330 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:04.330 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:04.330 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:04.330 [124/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:04.330 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:04.330 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:04.330 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:04.330 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:04.330 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:04.330 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:04.330 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:04.330 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:04.330 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:04.330 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:04.588 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:04.588 [136/203] Linking target lib/libxnvme.so
00:02:04.588 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:04.588 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:04.588 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:04.588 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:04.588 [141/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:04.588 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:04.588 [143/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:04.588 [144/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:04.588 [145/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:04.588 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:04.588 [147/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:04.588 [148/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:04.588 [149/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:04.588 [150/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:04.588 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:04.588 [152/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:04.846 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:04.846 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:04.846 [155/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:04.846 [156/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:04.846 [157/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:04.846 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:04.846 [159/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:04.846 [160/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:04.846 [161/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:04.846 [162/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:04.846 [163/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:04.846 [164/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:04.846 [165/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:05.104 [166/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:05.104 [167/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:05.104 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:05.104 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:05.104 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:05.104 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:05.104 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:05.362 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:05.362 [174/203] Linking static target lib/libxnvme.a
00:02:05.362 [175/203] Linking target tests/xnvme_tests_async_intf
00:02:05.362 [176/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:05.362 [177/203] Linking target tests/xnvme_tests_buf
00:02:05.362 [178/203] Linking target tests/xnvme_tests_lblk
00:02:05.362 [179/203] Linking target tests/xnvme_tests_cli
00:02:05.362 [180/203] Linking target tests/xnvme_tests_enum
00:02:05.362 [181/203] Linking target tests/xnvme_tests_ioworker
00:02:05.362 [182/203] Linking target tests/xnvme_tests_xnvme_file
00:02:05.362 [183/203] Linking target tests/xnvme_tests_znd_append
00:02:05.362 [184/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:05.362 [185/203] Linking target tests/xnvme_tests_znd_state
00:02:05.362 [186/203] Linking target tests/xnvme_tests_scc
00:02:05.362 [187/203] Linking target tests/xnvme_tests_kvs
00:02:05.362 [188/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:05.362 [189/203] Linking target tools/lblk
00:02:05.362 [190/203] Linking target examples/xnvme_dev
00:02:05.362 [191/203] Linking target tests/xnvme_tests_map
00:02:05.362 [192/203] Linking target tools/xdd
00:02:05.362 [193/203] Linking target tools/kvs
00:02:05.362 [194/203] Linking target tools/zoned
00:02:05.362 [195/203] Linking target tools/xnvme_file
00:02:05.362 [196/203] Linking target tools/xnvme
00:02:05.362 [197/203] Linking target examples/xnvme_hello
00:02:05.362 [198/203] Linking target examples/xnvme_enum
00:02:05.362 [199/203] Linking target examples/xnvme_single_async
00:02:05.362 [200/203] Linking target examples/xnvme_io_async
00:02:05.362 [201/203] Linking target examples/zoned_io_async
00:02:05.362 [202/203] Linking target examples/xnvme_single_sync
00:02:05.620 [203/203] Linking target examples/zoned_io_sync
00:02:05.620 INFO: autodetecting backend as ninja
00:02:05.620 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:05.620 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:12.247 The Meson build system
00:02:12.247 Version: 1.3.1
00:02:12.247 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:12.247 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:12.247 Build type: native build
00:02:12.247 Program cat found: YES (/usr/bin/cat)
00:02:12.247 Project name: DPDK
00:02:12.247 Project version: 23.11.0
00:02:12.247 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:12.247 C linker for the host machine: cc ld.bfd 2.39-16
00:02:12.247 Host machine cpu family: x86_64
00:02:12.247 Host machine cpu: x86_64
00:02:12.247 Message: ## Building in Developer Mode ##
00:02:12.247 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:12.247 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:12.247 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:12.247 Program python3 found: YES (/usr/bin/python3)
00:02:12.247 Program cat found: YES (/usr/bin/cat)
00:02:12.247 Compiler for C supports arguments -march=native: YES
00:02:12.247 Checking for size of "void *" : 8
00:02:12.247 Checking for size of "void *" : 8 (cached)
00:02:12.247 Library m found: YES
00:02:12.247 Library numa found: YES
00:02:12.247 Has header "numaif.h" : YES
00:02:12.247 Library fdt found: NO
00:02:12.247 Library execinfo found: NO
00:02:12.247 Has header "execinfo.h" : YES
00:02:12.247 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:12.247 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:12.247 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:12.247 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:12.247 Run-time dependency openssl found: YES 3.0.9 00:02:12.247 Run-time dependency libpcap found: YES 1.10.4 00:02:12.247 Has header "pcap.h" with dependency libpcap: YES 00:02:12.247 Compiler for C supports arguments -Wcast-qual: YES 00:02:12.247 Compiler for C supports arguments -Wdeprecated: YES 00:02:12.247 Compiler for C supports arguments -Wformat: YES 00:02:12.247 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:12.247 Compiler for C supports arguments -Wformat-security: NO 00:02:12.247 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:12.247 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:12.247 Compiler for C supports arguments -Wnested-externs: YES 00:02:12.247 Compiler for C supports arguments -Wold-style-definition: YES 00:02:12.247 Compiler for C supports arguments -Wpointer-arith: YES 00:02:12.247 Compiler for C supports arguments -Wsign-compare: YES 00:02:12.247 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:12.247 Compiler for C supports arguments -Wundef: YES 00:02:12.247 Compiler for C supports arguments -Wwrite-strings: YES 00:02:12.247 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:12.247 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:12.247 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:12.247 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:12.247 Program objdump found: YES (/usr/bin/objdump) 00:02:12.247 Compiler for C supports arguments -mavx512f: YES 00:02:12.247 Checking if "AVX512 checking" compiles: YES 00:02:12.247 Fetching value of define "__SSE4_2__" : 1 00:02:12.247 Fetching value of define "__AES__" : 1 00:02:12.247 Fetching value of define "__AVX__" : 1 00:02:12.247 Fetching value of define "__AVX2__" : 1 00:02:12.247 Fetching value of define "__AVX512BW__" : 1 00:02:12.247 Fetching value of define "__AVX512CD__" : 1 00:02:12.247 Fetching value of define "__AVX512DQ__" : 1 00:02:12.247 Fetching value of define "__AVX512F__" : 1 00:02:12.247 Fetching value of define "__AVX512VL__" : 1 00:02:12.247 Fetching value of define "__PCLMUL__" : 1 00:02:12.247 Fetching value of define "__RDRND__" : 1 00:02:12.247 Fetching value of define "__RDSEED__" : 1 00:02:12.247 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:12.247 Fetching value of define "__znver1__" : (undefined) 00:02:12.247 Fetching value of define "__znver2__" : (undefined) 00:02:12.247 Fetching value of define "__znver3__" : (undefined) 00:02:12.247 Fetching value of define "__znver4__" : (undefined) 00:02:12.247 Library asan found: YES 00:02:12.247 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:12.247 Message: lib/log: Defining dependency "log" 00:02:12.247 Message: lib/kvargs: Defining dependency "kvargs" 00:02:12.247 Message: lib/telemetry: Defining dependency "telemetry" 00:02:12.247 Library rt found: YES 00:02:12.247 Checking for function "getentropy" : NO 00:02:12.247 Message: lib/eal: Defining dependency "eal" 00:02:12.247 Message: lib/ring: Defining dependency "ring" 00:02:12.247 Message: lib/rcu: Defining dependency "rcu" 00:02:12.247 Message: lib/mempool: Defining dependency "mempool" 00:02:12.247 Message: lib/mbuf: Defining dependency "mbuf" 00:02:12.247 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:12.247 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:12.247 Fetching value of define 
"__AVX512BW__" : 1 (cached) 00:02:12.247 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:12.247 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:12.247 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:12.247 Compiler for C supports arguments -mpclmul: YES 00:02:12.247 Compiler for C supports arguments -maes: YES 00:02:12.247 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:12.247 Compiler for C supports arguments -mavx512bw: YES 00:02:12.247 Compiler for C supports arguments -mavx512dq: YES 00:02:12.247 Compiler for C supports arguments -mavx512vl: YES 00:02:12.247 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:12.247 Compiler for C supports arguments -mavx2: YES 00:02:12.247 Compiler for C supports arguments -mavx: YES 00:02:12.247 Message: lib/net: Defining dependency "net" 00:02:12.247 Message: lib/meter: Defining dependency "meter" 00:02:12.247 Message: lib/ethdev: Defining dependency "ethdev" 00:02:12.247 Message: lib/pci: Defining dependency "pci" 00:02:12.247 Message: lib/cmdline: Defining dependency "cmdline" 00:02:12.247 Message: lib/hash: Defining dependency "hash" 00:02:12.247 Message: lib/timer: Defining dependency "timer" 00:02:12.247 Message: lib/compressdev: Defining dependency "compressdev" 00:02:12.247 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:12.247 Message: lib/dmadev: Defining dependency "dmadev" 00:02:12.247 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:12.247 Message: lib/power: Defining dependency "power" 00:02:12.247 Message: lib/reorder: Defining dependency "reorder" 00:02:12.247 Message: lib/security: Defining dependency "security" 00:02:12.247 Has header "linux/userfaultfd.h" : YES 00:02:12.247 Has header "linux/vduse.h" : YES 00:02:12.247 Message: lib/vhost: Defining dependency "vhost" 00:02:12.247 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:12.247 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:12.247 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:12.247 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:12.247 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:12.247 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:12.247 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:12.247 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:12.247 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:12.247 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:12.247 Program doxygen found: YES (/usr/bin/doxygen) 00:02:12.247 Configuring doxy-api-html.conf using configuration 00:02:12.247 Configuring doxy-api-man.conf using configuration 00:02:12.247 Program mandb found: YES (/usr/bin/mandb) 00:02:12.247 Program sphinx-build found: NO 00:02:12.247 Configuring rte_build_config.h using configuration 00:02:12.247 Message: 00:02:12.247 ================= 00:02:12.247 Applications Enabled 00:02:12.247 ================= 00:02:12.247 00:02:12.247 apps: 00:02:12.247 00:02:12.247 00:02:12.247 Message: 00:02:12.247 ================= 00:02:12.247 Libraries Enabled 00:02:12.247 ================= 00:02:12.247 00:02:12.247 libs: 00:02:12.247 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:12.247 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:12.247 cryptodev, dmadev, power, reorder, security, vhost, 00:02:12.247 
00:02:12.247 Message: 00:02:12.247 =============== 00:02:12.247 Drivers Enabled 00:02:12.247 =============== 00:02:12.247 00:02:12.247 common: 00:02:12.247 00:02:12.247 bus: 00:02:12.248 pci, vdev, 00:02:12.248 mempool: 00:02:12.248 ring, 00:02:12.248 dma: 00:02:12.248 00:02:12.248 net: 00:02:12.248 00:02:12.248 crypto: 00:02:12.248 00:02:12.248 compress: 00:02:12.248 00:02:12.248 vdpa: 00:02:12.248 00:02:12.248 00:02:12.248 Message: 00:02:12.248 ================= 00:02:12.248 Content Skipped 00:02:12.248 ================= 00:02:12.248 00:02:12.248 apps: 00:02:12.248 dumpcap: explicitly disabled via build config 00:02:12.248 graph: explicitly disabled via build config 00:02:12.248 pdump: explicitly disabled via build config 00:02:12.248 proc-info: explicitly disabled via build config 00:02:12.248 test-acl: explicitly disabled via build config 00:02:12.248 test-bbdev: explicitly disabled via build config 00:02:12.248 test-cmdline: explicitly disabled via build config 00:02:12.248 test-compress-perf: explicitly disabled via build config 00:02:12.248 test-crypto-perf: explicitly disabled via build config 00:02:12.248 test-dma-perf: explicitly disabled via build config 00:02:12.248 test-eventdev: explicitly disabled via build config 00:02:12.248 test-fib: explicitly disabled via build config 00:02:12.248 test-flow-perf: explicitly disabled via build config 00:02:12.248 test-gpudev: explicitly disabled via build config 00:02:12.248 test-mldev: explicitly disabled via build config 00:02:12.248 test-pipeline: explicitly disabled via build config 00:02:12.248 test-pmd: explicitly disabled via build config 00:02:12.248 test-regex: explicitly disabled via build config 00:02:12.248 test-sad: explicitly disabled via build config 00:02:12.248 test-security-perf: explicitly disabled via build config 00:02:12.248 00:02:12.248 libs: 00:02:12.248 metrics: explicitly disabled via build config 00:02:12.248 acl: explicitly disabled via build config 00:02:12.248 bbdev: explicitly disabled via build config 00:02:12.248 bitratestats: explicitly disabled via build config 00:02:12.248 bpf: explicitly disabled via build config 00:02:12.248 cfgfile: explicitly disabled via build config 00:02:12.248 distributor: explicitly disabled via build config 00:02:12.248 efd: explicitly disabled via build config 00:02:12.248 eventdev: explicitly disabled via build config 00:02:12.248 dispatcher: explicitly disabled via build config 00:02:12.248 gpudev: explicitly disabled via build config 00:02:12.248 gro: explicitly disabled via build config 00:02:12.248 gso: explicitly disabled via build config 00:02:12.248 ip_frag: explicitly disabled via build config 00:02:12.248 jobstats: explicitly disabled via build config 00:02:12.248 latencystats: explicitly disabled via build config 00:02:12.248 lpm: explicitly disabled via build config 00:02:12.248 member: explicitly disabled via build config 00:02:12.248 pcapng: explicitly disabled via build config 00:02:12.248 rawdev: explicitly disabled via build config 00:02:12.248 regexdev: explicitly disabled via build config 00:02:12.248 mldev: explicitly disabled via build config 00:02:12.248 rib: explicitly disabled via build config 00:02:12.248 sched: explicitly disabled via build config 00:02:12.248 stack: explicitly disabled via build config 00:02:12.248 ipsec: explicitly disabled via build config 00:02:12.248 pdcp: explicitly disabled via build config 00:02:12.248 fib: explicitly disabled via build config 00:02:12.248 port: explicitly disabled via build config 00:02:12.248 pdump: 
explicitly disabled via build config 00:02:12.248 table: explicitly disabled via build config 00:02:12.248 pipeline: explicitly disabled via build config 00:02:12.248 graph: explicitly disabled via build config 00:02:12.248 node: explicitly disabled via build config 00:02:12.248 00:02:12.248 drivers: 00:02:12.248 common/cpt: not in enabled drivers build config 00:02:12.248 common/dpaax: not in enabled drivers build config 00:02:12.248 common/iavf: not in enabled drivers build config 00:02:12.248 common/idpf: not in enabled drivers build config 00:02:12.248 common/mvep: not in enabled drivers build config 00:02:12.248 common/octeontx: not in enabled drivers build config 00:02:12.248 bus/auxiliary: not in enabled drivers build config 00:02:12.248 bus/cdx: not in enabled drivers build config 00:02:12.248 bus/dpaa: not in enabled drivers build config 00:02:12.248 bus/fslmc: not in enabled drivers build config 00:02:12.248 bus/ifpga: not in enabled drivers build config 00:02:12.248 bus/platform: not in enabled drivers build config 00:02:12.248 bus/vmbus: not in enabled drivers build config 00:02:12.248 common/cnxk: not in enabled drivers build config 00:02:12.248 common/mlx5: not in enabled drivers build config 00:02:12.248 common/nfp: not in enabled drivers build config 00:02:12.248 common/qat: not in enabled drivers build config 00:02:12.248 common/sfc_efx: not in enabled drivers build config 00:02:12.248 mempool/bucket: not in enabled drivers build config 00:02:12.248 mempool/cnxk: not in enabled drivers build config 00:02:12.248 mempool/dpaa: not in enabled drivers build config 00:02:12.248 mempool/dpaa2: not in enabled drivers build config 00:02:12.248 mempool/octeontx: not in enabled drivers build config 00:02:12.248 mempool/stack: not in enabled drivers build config 00:02:12.248 dma/cnxk: not in enabled drivers build config 00:02:12.248 dma/dpaa: not in enabled drivers build config 00:02:12.248 dma/dpaa2: not in enabled drivers build config 00:02:12.248 dma/hisilicon: not in enabled drivers build config 00:02:12.248 dma/idxd: not in enabled drivers build config 00:02:12.248 dma/ioat: not in enabled drivers build config 00:02:12.248 dma/skeleton: not in enabled drivers build config 00:02:12.248 net/af_packet: not in enabled drivers build config 00:02:12.248 net/af_xdp: not in enabled drivers build config 00:02:12.248 net/ark: not in enabled drivers build config 00:02:12.248 net/atlantic: not in enabled drivers build config 00:02:12.248 net/avp: not in enabled drivers build config 00:02:12.248 net/axgbe: not in enabled drivers build config 00:02:12.248 net/bnx2x: not in enabled drivers build config 00:02:12.248 net/bnxt: not in enabled drivers build config 00:02:12.248 net/bonding: not in enabled drivers build config 00:02:12.248 net/cnxk: not in enabled drivers build config 00:02:12.248 net/cpfl: not in enabled drivers build config 00:02:12.248 net/cxgbe: not in enabled drivers build config 00:02:12.248 net/dpaa: not in enabled drivers build config 00:02:12.248 net/dpaa2: not in enabled drivers build config 00:02:12.248 net/e1000: not in enabled drivers build config 00:02:12.248 net/ena: not in enabled drivers build config 00:02:12.248 net/enetc: not in enabled drivers build config 00:02:12.248 net/enetfec: not in enabled drivers build config 00:02:12.248 net/enic: not in enabled drivers build config 00:02:12.248 net/failsafe: not in enabled drivers build config 00:02:12.248 net/fm10k: not in enabled drivers build config 00:02:12.248 net/gve: not in enabled drivers build config 
00:02:12.248 net/hinic: not in enabled drivers build config 00:02:12.248 net/hns3: not in enabled drivers build config 00:02:12.248 net/i40e: not in enabled drivers build config 00:02:12.248 net/iavf: not in enabled drivers build config 00:02:12.248 net/ice: not in enabled drivers build config 00:02:12.248 net/idpf: not in enabled drivers build config 00:02:12.248 net/igc: not in enabled drivers build config 00:02:12.248 net/ionic: not in enabled drivers build config 00:02:12.248 net/ipn3ke: not in enabled drivers build config 00:02:12.248 net/ixgbe: not in enabled drivers build config 00:02:12.248 net/mana: not in enabled drivers build config 00:02:12.248 net/memif: not in enabled drivers build config 00:02:12.248 net/mlx4: not in enabled drivers build config 00:02:12.248 net/mlx5: not in enabled drivers build config 00:02:12.248 net/mvneta: not in enabled drivers build config 00:02:12.248 net/mvpp2: not in enabled drivers build config 00:02:12.248 net/netvsc: not in enabled drivers build config 00:02:12.248 net/nfb: not in enabled drivers build config 00:02:12.248 net/nfp: not in enabled drivers build config 00:02:12.248 net/ngbe: not in enabled drivers build config 00:02:12.248 net/null: not in enabled drivers build config 00:02:12.248 net/octeontx: not in enabled drivers build config 00:02:12.248 net/octeon_ep: not in enabled drivers build config 00:02:12.248 net/pcap: not in enabled drivers build config 00:02:12.248 net/pfe: not in enabled drivers build config 00:02:12.248 net/qede: not in enabled drivers build config 00:02:12.248 net/ring: not in enabled drivers build config 00:02:12.248 net/sfc: not in enabled drivers build config 00:02:12.248 net/softnic: not in enabled drivers build config 00:02:12.248 net/tap: not in enabled drivers build config 00:02:12.248 net/thunderx: not in enabled drivers build config 00:02:12.248 net/txgbe: not in enabled drivers build config 00:02:12.248 net/vdev_netvsc: not in enabled drivers build config 00:02:12.248 net/vhost: not in enabled drivers build config 00:02:12.248 net/virtio: not in enabled drivers build config 00:02:12.248 net/vmxnet3: not in enabled drivers build config 00:02:12.248 raw/*: missing internal dependency, "rawdev" 00:02:12.248 crypto/armv8: not in enabled drivers build config 00:02:12.248 crypto/bcmfs: not in enabled drivers build config 00:02:12.248 crypto/caam_jr: not in enabled drivers build config 00:02:12.248 crypto/ccp: not in enabled drivers build config 00:02:12.248 crypto/cnxk: not in enabled drivers build config 00:02:12.249 crypto/dpaa_sec: not in enabled drivers build config 00:02:12.249 crypto/dpaa2_sec: not in enabled drivers build config 00:02:12.249 crypto/ipsec_mb: not in enabled drivers build config 00:02:12.249 crypto/mlx5: not in enabled drivers build config 00:02:12.249 crypto/mvsam: not in enabled drivers build config 00:02:12.249 crypto/nitrox: not in enabled drivers build config 00:02:12.249 crypto/null: not in enabled drivers build config 00:02:12.249 crypto/octeontx: not in enabled drivers build config 00:02:12.249 crypto/openssl: not in enabled drivers build config 00:02:12.249 crypto/scheduler: not in enabled drivers build config 00:02:12.249 crypto/uadk: not in enabled drivers build config 00:02:12.249 crypto/virtio: not in enabled drivers build config 00:02:12.249 compress/isal: not in enabled drivers build config 00:02:12.249 compress/mlx5: not in enabled drivers build config 00:02:12.249 compress/octeontx: not in enabled drivers build config 00:02:12.249 compress/zlib: not in enabled drivers 
build config 00:02:12.249 regex/*: missing internal dependency, "regexdev" 00:02:12.249 ml/*: missing internal dependency, "mldev" 00:02:12.249 vdpa/ifc: not in enabled drivers build config 00:02:12.249 vdpa/mlx5: not in enabled drivers build config 00:02:12.249 vdpa/nfp: not in enabled drivers build config 00:02:12.249 vdpa/sfc: not in enabled drivers build config 00:02:12.249 event/*: missing internal dependency, "eventdev" 00:02:12.249 baseband/*: missing internal dependency, "bbdev" 00:02:12.249 gpu/*: missing internal dependency, "gpudev" 00:02:12.249 00:02:12.249 00:02:12.249 Build targets in project: 85 00:02:12.249 00:02:12.249 DPDK 23.11.0 00:02:12.249 00:02:12.249 User defined options 00:02:12.249 buildtype : debug 00:02:12.249 default_library : shared 00:02:12.249 libdir : lib 00:02:12.249 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:12.249 b_sanitize : address 00:02:12.249 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:12.249 c_link_args : 00:02:12.249 cpu_instruction_set: native 00:02:12.249 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:12.249 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:12.249 enable_docs : false 00:02:12.249 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:12.249 enable_kmods : false 00:02:12.249 tests : false 00:02:12.249 00:02:12.249 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.249 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:12.249 [1/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:12.249 [2/265] Linking static target lib/librte_kvargs.a 00:02:12.249 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:12.249 [4/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:12.249 [5/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:12.249 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:12.249 [7/265] Linking static target lib/librte_log.a 00:02:12.508 [8/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:12.508 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:12.508 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:12.767 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.767 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:12.767 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:12.767 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:12.767 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:13.027 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:13.027 [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:13.027 [18/265] Linking static target lib/librte_telemetry.a 00:02:13.027 [19/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:13.287 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:13.287 [21/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:13.287 [22/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.546 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:13.546 [24/265] Linking target lib/librte_log.so.24.0 00:02:13.546 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:13.546 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:13.546 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:13.546 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:13.805 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:13.805 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:13.805 [31/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:13.805 [32/265] Linking target lib/librte_kvargs.so.24.0 00:02:13.805 [33/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.062 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:14.062 [35/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:14.062 [36/265] Linking target lib/librte_telemetry.so.24.0 00:02:14.062 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:14.062 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:14.062 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:14.062 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:14.319 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:14.319 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:14.319 [43/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:14.319 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:14.319 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:14.319 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:14.319 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.577 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:14.577 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:14.577 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:14.834 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:14.834 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:14.834 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:14.834 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:14.834 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.094 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.094 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.094 [58/265] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.094 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:15.094 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:15.352 [61/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:15.352 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:15.352 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:15.353 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:15.353 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:15.353 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:15.611 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:15.611 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:15.611 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:15.880 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:15.880 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:15.880 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:15.880 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:15.880 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:15.880 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:15.880 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:15.880 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.159 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.159 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.159 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.159 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.417 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.417 [83/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.417 [84/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.417 [85/265] Linking static target lib/librte_ring.a 00:02:16.676 [86/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:16.676 [87/265] Linking static target lib/librte_eal.a 00:02:16.676 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:16.676 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:16.935 [90/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:16.935 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:16.935 [92/265] Linking static target lib/librte_rcu.a 00:02:16.935 [93/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.935 [94/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:17.192 [95/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.192 [96/265] Linking static target lib/librte_mempool.a 00:02:17.192 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.450 [98/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:17.450 [99/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:17.709 
[100/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.709 [101/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.967 [102/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:17.967 [103/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.967 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:18.227 [105/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:18.227 [106/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.227 [107/265] Linking static target lib/librte_mbuf.a 00:02:18.227 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.227 [109/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:18.227 [110/265] Linking static target lib/librte_meter.a 00:02:18.227 [111/265] Linking static target lib/librte_net.a 00:02:18.486 [112/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.486 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.486 [114/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.486 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.745 [116/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.745 [117/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.745 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:19.003 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:19.003 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:19.261 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.261 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:19.520 [123/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.520 [124/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.520 [125/265] Linking static target lib/librte_pci.a 00:02:19.520 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.520 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.779 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:19.779 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.779 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.779 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.779 [132/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.779 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.779 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:20.038 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:20.038 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:20.038 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:20.038 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:20.038 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:20.038 [140/265] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:20.038 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.038 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:20.297 [143/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:20.297 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.297 [145/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:20.555 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:20.555 [147/265] Linking static target lib/librte_cmdline.a 00:02:20.555 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.555 [149/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.555 [150/265] Linking static target lib/librte_timer.a 00:02:20.813 [151/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.813 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.813 [153/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.813 [154/265] Linking static target lib/librte_ethdev.a 00:02:21.072 [155/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:21.072 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:21.330 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:21.330 [158/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.330 [159/265] Linking static target lib/librte_compressdev.a 00:02:21.330 [160/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:21.330 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:21.330 [162/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:21.589 [163/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:21.589 [164/265] Linking static target lib/librte_dmadev.a 00:02:21.589 [165/265] Linking static target lib/librte_hash.a 00:02:21.589 [166/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:21.848 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:21.848 [168/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:21.848 [169/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:22.107 [170/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:22.107 [171/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.107 [172/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.107 [173/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.107 [174/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:22.409 [175/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:22.409 [176/265] Linking static target lib/librte_cryptodev.a 00:02:22.409 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:22.409 [178/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:22.668 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:22.669 [180/265] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.669 [181/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:22.669 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:22.669 [183/265] Linking static target lib/librte_power.a 00:02:22.928 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:22.928 [185/265] Linking static target lib/librte_reorder.a 00:02:22.928 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:23.188 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:23.188 [188/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:23.188 [189/265] Linking static target lib/librte_security.a 00:02:23.446 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:23.446 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.706 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:23.965 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.965 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:23.965 [195/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.965 [196/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:24.223 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:24.223 [198/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:24.482 [199/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:24.483 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:24.483 [201/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:24.483 [202/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.483 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.741 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.741 [205/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:24.741 [206/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.741 [207/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.741 [208/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.741 [209/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:24.741 [210/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.741 [211/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:25.001 [212/265] Linking static target drivers/librte_bus_pci.a 00:02:25.001 [213/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:25.001 [214/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.001 [215/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:25.001 [216/265] Linking static target drivers/librte_bus_vdev.a 00:02:25.001 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:25.001 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:25.261 [219/265] 
Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.261 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:25.261 [221/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.261 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:25.261 [223/265] Linking static target drivers/librte_mempool_ring.a 00:02:25.261 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.640 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:29.927 [226/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.927 [227/265] Linking target lib/librte_eal.so.24.0 00:02:29.927 [228/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:29.927 [229/265] Linking target lib/librte_meter.so.24.0 00:02:29.927 [230/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:29.927 [231/265] Linking target lib/librte_ring.so.24.0 00:02:29.927 [232/265] Linking target lib/librte_dmadev.so.24.0 00:02:29.927 [233/265] Linking target lib/librte_timer.so.24.0 00:02:29.927 [234/265] Linking target lib/librte_pci.so.24.0 00:02:29.927 [235/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:29.927 [236/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:29.927 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:29.927 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:29.927 [239/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:29.927 [240/265] Linking target lib/librte_rcu.so.24.0 00:02:29.927 [241/265] Linking target lib/librte_mempool.so.24.0 00:02:29.927 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:29.927 [243/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:30.187 [244/265] Linking static target lib/librte_vhost.a 00:02:30.187 [245/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.187 [246/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:30.187 [247/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:30.187 [248/265] Linking target lib/librte_mbuf.so.24.0 00:02:30.187 [249/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:30.446 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:30.446 [251/265] Linking target lib/librte_net.so.24.0 00:02:30.446 [252/265] Linking target lib/librte_compressdev.so.24.0 00:02:30.446 [253/265] Linking target lib/librte_reorder.so.24.0 00:02:30.446 [254/265] Linking target lib/librte_cryptodev.so.24.0 00:02:30.446 [255/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:30.446 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:30.706 [257/265] Linking target lib/librte_hash.so.24.0 00:02:30.706 [258/265] Linking target lib/librte_security.so.24.0 00:02:30.706 [259/265] Linking target lib/librte_cmdline.so.24.0 00:02:30.706 [260/265] Linking target lib/librte_ethdev.so.24.0 00:02:30.706 [261/265] Generating symbol file 
lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:30.706 [262/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:30.964 [263/265] Linking target lib/librte_power.so.24.0 00:02:32.342 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.342 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:32.342 INFO: autodetecting backend as ninja 00:02:32.342 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:33.718 CC lib/ut_mock/mock.o 00:02:33.718 CC lib/ut/ut.o 00:02:33.718 CC lib/log/log_flags.o 00:02:33.718 CC lib/log/log.o 00:02:33.718 CC lib/log/log_deprecated.o 00:02:33.718 LIB libspdk_ut_mock.a 00:02:33.718 SO libspdk_ut_mock.so.6.0 00:02:33.718 LIB libspdk_ut.a 00:02:33.718 LIB libspdk_log.a 00:02:33.985 SO libspdk_ut.so.2.0 00:02:33.985 SYMLINK libspdk_ut_mock.so 00:02:33.985 SO libspdk_log.so.7.0 00:02:33.985 SYMLINK libspdk_ut.so 00:02:33.985 SYMLINK libspdk_log.so 00:02:34.244 CXX lib/trace_parser/trace.o 00:02:34.244 CC lib/ioat/ioat.o 00:02:34.244 CC lib/util/cpuset.o 00:02:34.244 CC lib/util/base64.o 00:02:34.244 CC lib/util/crc16.o 00:02:34.244 CC lib/util/crc32c.o 00:02:34.244 CC lib/util/bit_array.o 00:02:34.244 CC lib/util/crc32.o 00:02:34.244 CC lib/dma/dma.o 00:02:34.503 CC lib/vfio_user/host/vfio_user_pci.o 00:02:34.503 CC lib/util/crc32_ieee.o 00:02:34.503 CC lib/util/crc64.o 00:02:34.503 CC lib/util/dif.o 00:02:34.503 CC lib/util/fd.o 00:02:34.503 CC lib/vfio_user/host/vfio_user.o 00:02:34.503 LIB libspdk_dma.a 00:02:34.503 CC lib/util/file.o 00:02:34.503 SO libspdk_dma.so.4.0 00:02:34.503 CC lib/util/hexlify.o 00:02:34.503 LIB libspdk_ioat.a 00:02:34.503 CC lib/util/iov.o 00:02:34.503 SYMLINK libspdk_dma.so 00:02:34.503 CC lib/util/math.o 00:02:34.762 CC lib/util/pipe.o 00:02:34.762 SO libspdk_ioat.so.7.0 00:02:34.762 CC lib/util/strerror_tls.o 00:02:34.762 SYMLINK libspdk_ioat.so 00:02:34.762 CC lib/util/string.o 00:02:34.762 CC lib/util/uuid.o 00:02:34.762 LIB libspdk_vfio_user.a 00:02:34.762 CC lib/util/fd_group.o 00:02:34.762 CC lib/util/xor.o 00:02:34.762 SO libspdk_vfio_user.so.5.0 00:02:34.762 CC lib/util/zipf.o 00:02:34.762 SYMLINK libspdk_vfio_user.so 00:02:35.021 LIB libspdk_util.a 00:02:35.281 SO libspdk_util.so.9.0 00:02:35.281 LIB libspdk_trace_parser.a 00:02:35.281 SO libspdk_trace_parser.so.5.0 00:02:35.541 SYMLINK libspdk_util.so 00:02:35.541 SYMLINK libspdk_trace_parser.so 00:02:35.541 CC lib/idxd/idxd.o 00:02:35.541 CC lib/json/json_parse.o 00:02:35.541 CC lib/json/json_write.o 00:02:35.541 CC lib/json/json_util.o 00:02:35.541 CC lib/idxd/idxd_user.o 00:02:35.541 CC lib/rdma/common.o 00:02:35.541 CC lib/rdma/rdma_verbs.o 00:02:35.541 CC lib/vmd/vmd.o 00:02:35.800 CC lib/conf/conf.o 00:02:35.800 CC lib/env_dpdk/env.o 00:02:35.800 CC lib/env_dpdk/memory.o 00:02:35.800 CC lib/vmd/led.o 00:02:35.800 CC lib/env_dpdk/pci.o 00:02:35.800 CC lib/env_dpdk/init.o 00:02:35.800 LIB libspdk_conf.a 00:02:35.800 LIB libspdk_json.a 00:02:35.800 LIB libspdk_rdma.a 00:02:36.059 SO libspdk_conf.so.6.0 00:02:36.059 SO libspdk_json.so.6.0 00:02:36.059 SO libspdk_rdma.so.6.0 00:02:36.059 SYMLINK libspdk_conf.so 00:02:36.059 CC lib/env_dpdk/threads.o 00:02:36.059 CC lib/env_dpdk/pci_ioat.o 00:02:36.059 SYMLINK libspdk_json.so 00:02:36.059 SYMLINK libspdk_rdma.so 00:02:36.059 CC lib/env_dpdk/pci_virtio.o 00:02:36.059 CC lib/env_dpdk/pci_vmd.o 00:02:36.059 CC lib/env_dpdk/pci_idxd.o 00:02:36.059 CC 
lib/env_dpdk/pci_event.o 00:02:36.059 CC lib/jsonrpc/jsonrpc_server.o 00:02:36.318 LIB libspdk_idxd.a 00:02:36.318 CC lib/env_dpdk/sigbus_handler.o 00:02:36.318 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:36.318 CC lib/jsonrpc/jsonrpc_client.o 00:02:36.318 SO libspdk_idxd.so.12.0 00:02:36.318 CC lib/env_dpdk/pci_dpdk.o 00:02:36.318 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.318 LIB libspdk_vmd.a 00:02:36.318 SYMLINK libspdk_idxd.so 00:02:36.318 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:36.318 SO libspdk_vmd.so.6.0 00:02:36.318 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:36.318 SYMLINK libspdk_vmd.so 00:02:36.577 LIB libspdk_jsonrpc.a 00:02:36.577 SO libspdk_jsonrpc.so.6.0 00:02:36.577 SYMLINK libspdk_jsonrpc.so 00:02:37.151 CC lib/rpc/rpc.o 00:02:37.151 LIB libspdk_env_dpdk.a 00:02:37.409 LIB libspdk_rpc.a 00:02:37.409 SO libspdk_env_dpdk.so.14.0 00:02:37.409 SO libspdk_rpc.so.6.0 00:02:37.409 SYMLINK libspdk_rpc.so 00:02:37.409 SYMLINK libspdk_env_dpdk.so 00:02:37.975 CC lib/keyring/keyring_rpc.o 00:02:37.975 CC lib/keyring/keyring.o 00:02:37.975 CC lib/trace/trace.o 00:02:37.975 CC lib/trace/trace_flags.o 00:02:37.975 CC lib/trace/trace_rpc.o 00:02:37.975 CC lib/notify/notify.o 00:02:37.975 CC lib/notify/notify_rpc.o 00:02:37.975 LIB libspdk_notify.a 00:02:37.975 LIB libspdk_keyring.a 00:02:37.975 SO libspdk_notify.so.6.0 00:02:37.975 LIB libspdk_trace.a 00:02:37.975 SO libspdk_keyring.so.1.0 00:02:38.233 SYMLINK libspdk_notify.so 00:02:38.233 SO libspdk_trace.so.10.0 00:02:38.233 SYMLINK libspdk_keyring.so 00:02:38.233 SYMLINK libspdk_trace.so 00:02:38.492 CC lib/sock/sock.o 00:02:38.492 CC lib/sock/sock_rpc.o 00:02:38.492 CC lib/thread/thread.o 00:02:38.492 CC lib/thread/iobuf.o 00:02:39.057 LIB libspdk_sock.a 00:02:39.057 SO libspdk_sock.so.9.0 00:02:39.057 SYMLINK libspdk_sock.so 00:02:39.624 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:39.624 CC lib/nvme/nvme_ctrlr.o 00:02:39.624 CC lib/nvme/nvme_fabric.o 00:02:39.624 CC lib/nvme/nvme_ns.o 00:02:39.624 CC lib/nvme/nvme_pcie_common.o 00:02:39.624 CC lib/nvme/nvme_ns_cmd.o 00:02:39.624 CC lib/nvme/nvme_qpair.o 00:02:39.624 CC lib/nvme/nvme_pcie.o 00:02:39.624 CC lib/nvme/nvme.o 00:02:40.191 CC lib/nvme/nvme_quirks.o 00:02:40.191 CC lib/nvme/nvme_transport.o 00:02:40.191 LIB libspdk_thread.a 00:02:40.450 SO libspdk_thread.so.10.0 00:02:40.450 CC lib/nvme/nvme_discovery.o 00:02:40.450 SYMLINK libspdk_thread.so 00:02:40.450 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:40.450 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:40.450 CC lib/nvme/nvme_tcp.o 00:02:40.450 CC lib/nvme/nvme_opal.o 00:02:40.450 CC lib/nvme/nvme_io_msg.o 00:02:40.709 CC lib/nvme/nvme_poll_group.o 00:02:40.967 CC lib/accel/accel.o 00:02:40.967 CC lib/nvme/nvme_zns.o 00:02:40.967 CC lib/accel/accel_rpc.o 00:02:40.967 CC lib/accel/accel_sw.o 00:02:41.226 CC lib/blob/blobstore.o 00:02:41.226 CC lib/init/json_config.o 00:02:41.226 CC lib/virtio/virtio.o 00:02:41.226 CC lib/virtio/virtio_vhost_user.o 00:02:41.226 CC lib/virtio/virtio_vfio_user.o 00:02:41.484 CC lib/init/subsystem.o 00:02:41.484 CC lib/blob/request.o 00:02:41.484 CC lib/init/subsystem_rpc.o 00:02:41.484 CC lib/init/rpc.o 00:02:41.484 CC lib/nvme/nvme_stubs.o 00:02:41.484 CC lib/nvme/nvme_auth.o 00:02:41.484 CC lib/virtio/virtio_pci.o 00:02:41.484 CC lib/nvme/nvme_cuse.o 00:02:41.743 LIB libspdk_init.a 00:02:41.743 CC lib/blob/zeroes.o 00:02:41.743 SO libspdk_init.so.5.0 00:02:41.743 SYMLINK libspdk_init.so 00:02:41.743 CC lib/blob/blob_bs_dev.o 00:02:42.001 LIB libspdk_virtio.a 00:02:42.001 CC lib/nvme/nvme_rdma.o 00:02:42.001 SO 
libspdk_virtio.so.7.0 00:02:42.001 LIB libspdk_accel.a 00:02:42.001 SO libspdk_accel.so.15.0 00:02:42.001 SYMLINK libspdk_virtio.so 00:02:42.001 CC lib/event/app.o 00:02:42.001 CC lib/event/log_rpc.o 00:02:42.001 CC lib/event/app_rpc.o 00:02:42.001 CC lib/event/reactor.o 00:02:42.001 SYMLINK libspdk_accel.so 00:02:42.001 CC lib/event/scheduler_static.o 00:02:42.376 CC lib/bdev/bdev_zone.o 00:02:42.376 CC lib/bdev/bdev.o 00:02:42.376 CC lib/bdev/bdev_rpc.o 00:02:42.376 CC lib/bdev/part.o 00:02:42.376 CC lib/bdev/scsi_nvme.o 00:02:42.654 LIB libspdk_event.a 00:02:42.654 SO libspdk_event.so.13.0 00:02:42.913 SYMLINK libspdk_event.so 00:02:43.481 LIB libspdk_nvme.a 00:02:43.481 SO libspdk_nvme.so.13.0 00:02:44.046 SYMLINK libspdk_nvme.so 00:02:44.612 LIB libspdk_blob.a 00:02:44.612 SO libspdk_blob.so.11.0 00:02:44.870 SYMLINK libspdk_blob.so 00:02:45.128 CC lib/lvol/lvol.o 00:02:45.128 CC lib/blobfs/blobfs.o 00:02:45.128 CC lib/blobfs/tree.o 00:02:45.387 LIB libspdk_bdev.a 00:02:45.387 SO libspdk_bdev.so.15.0 00:02:45.645 SYMLINK libspdk_bdev.so 00:02:45.902 CC lib/ublk/ublk.o 00:02:45.902 CC lib/ublk/ublk_rpc.o 00:02:45.902 CC lib/scsi/dev.o 00:02:45.902 CC lib/scsi/lun.o 00:02:45.902 CC lib/scsi/port.o 00:02:45.902 CC lib/nbd/nbd.o 00:02:45.902 CC lib/nvmf/ctrlr.o 00:02:45.902 CC lib/ftl/ftl_core.o 00:02:45.902 CC lib/scsi/scsi.o 00:02:45.902 CC lib/scsi/scsi_bdev.o 00:02:46.160 CC lib/ftl/ftl_init.o 00:02:46.160 CC lib/ftl/ftl_layout.o 00:02:46.160 LIB libspdk_blobfs.a 00:02:46.160 CC lib/ftl/ftl_debug.o 00:02:46.160 SO libspdk_blobfs.so.10.0 00:02:46.160 LIB libspdk_lvol.a 00:02:46.160 SO libspdk_lvol.so.10.0 00:02:46.161 CC lib/nbd/nbd_rpc.o 00:02:46.161 CC lib/scsi/scsi_pr.o 00:02:46.161 SYMLINK libspdk_blobfs.so 00:02:46.161 CC lib/scsi/scsi_rpc.o 00:02:46.419 CC lib/scsi/task.o 00:02:46.419 SYMLINK libspdk_lvol.so 00:02:46.419 CC lib/ftl/ftl_io.o 00:02:46.419 CC lib/ftl/ftl_sb.o 00:02:46.419 CC lib/ftl/ftl_l2p.o 00:02:46.419 LIB libspdk_nbd.a 00:02:46.419 CC lib/ftl/ftl_l2p_flat.o 00:02:46.419 SO libspdk_nbd.so.7.0 00:02:46.419 CC lib/nvmf/ctrlr_discovery.o 00:02:46.678 LIB libspdk_ublk.a 00:02:46.678 SYMLINK libspdk_nbd.so 00:02:46.678 CC lib/ftl/ftl_nv_cache.o 00:02:46.678 CC lib/nvmf/ctrlr_bdev.o 00:02:46.678 CC lib/ftl/ftl_band.o 00:02:46.678 SO libspdk_ublk.so.3.0 00:02:46.678 CC lib/nvmf/subsystem.o 00:02:46.678 LIB libspdk_scsi.a 00:02:46.678 CC lib/ftl/ftl_band_ops.o 00:02:46.678 SYMLINK libspdk_ublk.so 00:02:46.678 CC lib/ftl/ftl_writer.o 00:02:46.678 CC lib/ftl/ftl_rq.o 00:02:46.678 SO libspdk_scsi.so.9.0 00:02:46.936 SYMLINK libspdk_scsi.so 00:02:46.936 CC lib/ftl/ftl_reloc.o 00:02:46.936 CC lib/ftl/ftl_l2p_cache.o 00:02:46.936 CC lib/nvmf/nvmf.o 00:02:46.936 CC lib/iscsi/conn.o 00:02:47.194 CC lib/nvmf/nvmf_rpc.o 00:02:47.194 CC lib/vhost/vhost.o 00:02:47.194 CC lib/ftl/ftl_p2l.o 00:02:47.452 CC lib/vhost/vhost_rpc.o 00:02:47.452 CC lib/iscsi/init_grp.o 00:02:47.710 CC lib/ftl/mngt/ftl_mngt.o 00:02:47.710 CC lib/iscsi/iscsi.o 00:02:47.710 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:47.710 CC lib/nvmf/transport.o 00:02:47.710 CC lib/iscsi/md5.o 00:02:47.710 CC lib/iscsi/param.o 00:02:47.970 CC lib/vhost/vhost_scsi.o 00:02:47.970 CC lib/iscsi/portal_grp.o 00:02:47.970 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:47.971 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:47.971 CC lib/iscsi/tgt_node.o 00:02:47.971 CC lib/iscsi/iscsi_subsystem.o 00:02:47.971 CC lib/vhost/vhost_blk.o 00:02:47.971 CC lib/vhost/rte_vhost_user.o 00:02:47.971 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:48.289 CC 
lib/ftl/mngt/ftl_mngt_misc.o 00:02:48.289 CC lib/nvmf/tcp.o 00:02:48.289 CC lib/nvmf/rdma.o 00:02:48.289 CC lib/iscsi/iscsi_rpc.o 00:02:48.289 CC lib/iscsi/task.o 00:02:48.547 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:48.547 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:48.547 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:48.547 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:48.547 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:48.806 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:48.806 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:48.806 CC lib/ftl/utils/ftl_conf.o 00:02:48.806 CC lib/ftl/utils/ftl_md.o 00:02:48.806 CC lib/ftl/utils/ftl_mempool.o 00:02:49.065 CC lib/ftl/utils/ftl_bitmap.o 00:02:49.065 CC lib/ftl/utils/ftl_property.o 00:02:49.065 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:49.065 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:49.065 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:49.065 LIB libspdk_vhost.a 00:02:49.325 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:49.325 LIB libspdk_iscsi.a 00:02:49.325 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:49.325 SO libspdk_vhost.so.8.0 00:02:49.325 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:49.325 SO libspdk_iscsi.so.8.0 00:02:49.325 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:49.325 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:49.325 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:49.325 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:49.325 SYMLINK libspdk_vhost.so 00:02:49.325 CC lib/ftl/base/ftl_base_dev.o 00:02:49.325 CC lib/ftl/base/ftl_base_bdev.o 00:02:49.585 CC lib/ftl/ftl_trace.o 00:02:49.585 SYMLINK libspdk_iscsi.so 00:02:49.845 LIB libspdk_ftl.a 00:02:50.104 SO libspdk_ftl.so.9.0 00:02:50.364 SYMLINK libspdk_ftl.so 00:02:50.932 LIB libspdk_nvmf.a 00:02:50.932 SO libspdk_nvmf.so.18.0 00:02:51.191 SYMLINK libspdk_nvmf.so 00:02:51.800 CC module/env_dpdk/env_dpdk_rpc.o 00:02:51.800 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:51.800 CC module/blob/bdev/blob_bdev.o 00:02:51.800 CC module/keyring/file/keyring.o 00:02:51.800 CC module/scheduler/gscheduler/gscheduler.o 00:02:51.800 CC module/sock/posix/posix.o 00:02:51.800 CC module/accel/error/accel_error.o 00:02:51.800 CC module/accel/ioat/accel_ioat.o 00:02:51.800 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:51.800 CC module/accel/dsa/accel_dsa.o 00:02:51.800 LIB libspdk_env_dpdk_rpc.a 00:02:51.800 SO libspdk_env_dpdk_rpc.so.6.0 00:02:51.800 CC module/keyring/file/keyring_rpc.o 00:02:51.800 LIB libspdk_scheduler_gscheduler.a 00:02:51.800 LIB libspdk_scheduler_dpdk_governor.a 00:02:51.800 SYMLINK libspdk_env_dpdk_rpc.so 00:02:51.800 CC module/accel/ioat/accel_ioat_rpc.o 00:02:51.800 SO libspdk_scheduler_gscheduler.so.4.0 00:02:52.087 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:52.087 CC module/accel/error/accel_error_rpc.o 00:02:52.087 LIB libspdk_scheduler_dynamic.a 00:02:52.087 CC module/accel/dsa/accel_dsa_rpc.o 00:02:52.087 SO libspdk_scheduler_dynamic.so.4.0 00:02:52.087 SYMLINK libspdk_scheduler_gscheduler.so 00:02:52.087 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:52.087 LIB libspdk_blob_bdev.a 00:02:52.087 LIB libspdk_keyring_file.a 00:02:52.087 SYMLINK libspdk_scheduler_dynamic.so 00:02:52.087 LIB libspdk_accel_ioat.a 00:02:52.087 SO libspdk_blob_bdev.so.11.0 00:02:52.087 SO libspdk_keyring_file.so.1.0 00:02:52.087 SO libspdk_accel_ioat.so.6.0 00:02:52.087 LIB libspdk_accel_dsa.a 00:02:52.087 LIB libspdk_accel_error.a 00:02:52.087 SYMLINK libspdk_blob_bdev.so 00:02:52.087 SO libspdk_accel_dsa.so.5.0 00:02:52.087 SO libspdk_accel_error.so.2.0 00:02:52.087 SYMLINK libspdk_accel_ioat.so 00:02:52.087 SYMLINK 
libspdk_keyring_file.so 00:02:52.087 SYMLINK libspdk_accel_error.so 00:02:52.087 SYMLINK libspdk_accel_dsa.so 00:02:52.087 CC module/accel/iaa/accel_iaa.o 00:02:52.087 CC module/accel/iaa/accel_iaa_rpc.o 00:02:52.350 CC module/bdev/delay/vbdev_delay.o 00:02:52.350 CC module/bdev/malloc/bdev_malloc.o 00:02:52.350 LIB libspdk_accel_iaa.a 00:02:52.350 CC module/bdev/null/bdev_null.o 00:02:52.350 CC module/blobfs/bdev/blobfs_bdev.o 00:02:52.350 CC module/bdev/gpt/gpt.o 00:02:52.350 CC module/bdev/lvol/vbdev_lvol.o 00:02:52.350 CC module/bdev/error/vbdev_error.o 00:02:52.350 SO libspdk_accel_iaa.so.3.0 00:02:52.609 LIB libspdk_sock_posix.a 00:02:52.609 SYMLINK libspdk_accel_iaa.so 00:02:52.609 CC module/bdev/nvme/bdev_nvme.o 00:02:52.609 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:52.609 SO libspdk_sock_posix.so.6.0 00:02:52.609 CC module/bdev/gpt/vbdev_gpt.o 00:02:52.609 SYMLINK libspdk_sock_posix.so 00:02:52.609 CC module/bdev/null/bdev_null_rpc.o 00:02:52.609 CC module/bdev/error/vbdev_error_rpc.o 00:02:52.609 LIB libspdk_blobfs_bdev.a 00:02:52.609 CC module/bdev/passthru/vbdev_passthru.o 00:02:52.868 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:52.868 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:52.868 SO libspdk_blobfs_bdev.so.6.0 00:02:52.868 CC module/bdev/raid/bdev_raid.o 00:02:52.868 LIB libspdk_bdev_error.a 00:02:52.868 SYMLINK libspdk_blobfs_bdev.so 00:02:52.868 CC module/bdev/raid/bdev_raid_rpc.o 00:02:52.868 LIB libspdk_bdev_null.a 00:02:52.868 SO libspdk_bdev_error.so.6.0 00:02:52.868 LIB libspdk_bdev_gpt.a 00:02:52.868 SO libspdk_bdev_null.so.6.0 00:02:52.868 SO libspdk_bdev_gpt.so.6.0 00:02:52.868 LIB libspdk_bdev_delay.a 00:02:52.868 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:52.868 LIB libspdk_bdev_malloc.a 00:02:52.868 SYMLINK libspdk_bdev_error.so 00:02:52.868 SO libspdk_bdev_delay.so.6.0 00:02:52.868 SYMLINK libspdk_bdev_null.so 00:02:52.868 SO libspdk_bdev_malloc.so.6.0 00:02:52.868 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:52.868 SYMLINK libspdk_bdev_gpt.so 00:02:52.868 CC module/bdev/nvme/nvme_rpc.o 00:02:53.127 SYMLINK libspdk_bdev_delay.so 00:02:53.127 CC module/bdev/raid/bdev_raid_sb.o 00:02:53.127 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:53.127 SYMLINK libspdk_bdev_malloc.so 00:02:53.127 CC module/bdev/raid/raid0.o 00:02:53.127 CC module/bdev/split/vbdev_split.o 00:02:53.127 LIB libspdk_bdev_passthru.a 00:02:53.127 CC module/bdev/nvme/bdev_mdns_client.o 00:02:53.127 SO libspdk_bdev_passthru.so.6.0 00:02:53.127 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:53.127 LIB libspdk_bdev_lvol.a 00:02:53.386 CC module/bdev/nvme/vbdev_opal.o 00:02:53.386 SO libspdk_bdev_lvol.so.6.0 00:02:53.386 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:53.386 SYMLINK libspdk_bdev_passthru.so 00:02:53.386 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:53.386 CC module/bdev/split/vbdev_split_rpc.o 00:02:53.386 SYMLINK libspdk_bdev_lvol.so 00:02:53.386 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:53.386 LIB libspdk_bdev_split.a 00:02:53.645 SO libspdk_bdev_split.so.6.0 00:02:53.645 CC module/bdev/raid/raid1.o 00:02:53.645 CC module/bdev/xnvme/bdev_xnvme.o 00:02:53.645 LIB libspdk_bdev_zone_block.a 00:02:53.645 SYMLINK libspdk_bdev_split.so 00:02:53.645 SO libspdk_bdev_zone_block.so.6.0 00:02:53.645 CC module/bdev/aio/bdev_aio.o 00:02:53.645 CC module/bdev/aio/bdev_aio_rpc.o 00:02:53.645 CC module/bdev/ftl/bdev_ftl.o 00:02:53.645 SYMLINK libspdk_bdev_zone_block.so 00:02:53.645 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:02:53.645 CC module/bdev/iscsi/bdev_iscsi.o 
00:02:53.645 CC module/bdev/raid/concat.o 00:02:53.904 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:53.904 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:53.904 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:53.904 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:53.904 LIB libspdk_bdev_xnvme.a 00:02:53.904 SO libspdk_bdev_xnvme.so.3.0 00:02:53.904 SYMLINK libspdk_bdev_xnvme.so 00:02:53.904 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:53.904 LIB libspdk_bdev_aio.a 00:02:53.904 LIB libspdk_bdev_raid.a 00:02:53.904 SO libspdk_bdev_aio.so.6.0 00:02:53.904 LIB libspdk_bdev_ftl.a 00:02:54.164 SO libspdk_bdev_raid.so.6.0 00:02:54.164 SO libspdk_bdev_ftl.so.6.0 00:02:54.164 SYMLINK libspdk_bdev_aio.so 00:02:54.164 LIB libspdk_bdev_iscsi.a 00:02:54.164 SYMLINK libspdk_bdev_ftl.so 00:02:54.164 SYMLINK libspdk_bdev_raid.so 00:02:54.164 SO libspdk_bdev_iscsi.so.6.0 00:02:54.164 SYMLINK libspdk_bdev_iscsi.so 00:02:54.422 LIB libspdk_bdev_virtio.a 00:02:54.422 SO libspdk_bdev_virtio.so.6.0 00:02:54.422 SYMLINK libspdk_bdev_virtio.so 00:02:54.989 LIB libspdk_bdev_nvme.a 00:02:55.247 SO libspdk_bdev_nvme.so.7.0 00:02:55.247 SYMLINK libspdk_bdev_nvme.so 00:02:55.814 CC module/event/subsystems/vmd/vmd.o 00:02:55.814 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:56.073 CC module/event/subsystems/sock/sock.o 00:02:56.073 CC module/event/subsystems/scheduler/scheduler.o 00:02:56.073 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:56.073 CC module/event/subsystems/iobuf/iobuf.o 00:02:56.073 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:56.073 CC module/event/subsystems/keyring/keyring.o 00:02:56.073 LIB libspdk_event_sock.a 00:02:56.073 LIB libspdk_event_scheduler.a 00:02:56.073 LIB libspdk_event_keyring.a 00:02:56.073 LIB libspdk_event_vmd.a 00:02:56.073 SO libspdk_event_sock.so.5.0 00:02:56.073 SO libspdk_event_scheduler.so.4.0 00:02:56.073 LIB libspdk_event_iobuf.a 00:02:56.073 SO libspdk_event_keyring.so.1.0 00:02:56.073 LIB libspdk_event_vhost_blk.a 00:02:56.073 SO libspdk_event_vmd.so.6.0 00:02:56.073 SO libspdk_event_vhost_blk.so.3.0 00:02:56.073 SO libspdk_event_iobuf.so.3.0 00:02:56.073 SYMLINK libspdk_event_sock.so 00:02:56.073 SYMLINK libspdk_event_scheduler.so 00:02:56.073 SYMLINK libspdk_event_keyring.so 00:02:56.332 SYMLINK libspdk_event_vhost_blk.so 00:02:56.332 SYMLINK libspdk_event_vmd.so 00:02:56.332 SYMLINK libspdk_event_iobuf.so 00:02:56.591 CC module/event/subsystems/accel/accel.o 00:02:56.850 LIB libspdk_event_accel.a 00:02:56.850 SO libspdk_event_accel.so.6.0 00:02:56.850 SYMLINK libspdk_event_accel.so 00:02:57.419 CC module/event/subsystems/bdev/bdev.o 00:02:57.419 LIB libspdk_event_bdev.a 00:02:57.678 SO libspdk_event_bdev.so.6.0 00:02:57.678 SYMLINK libspdk_event_bdev.so 00:02:57.937 CC module/event/subsystems/scsi/scsi.o 00:02:57.937 CC module/event/subsystems/nbd/nbd.o 00:02:57.937 CC module/event/subsystems/ublk/ublk.o 00:02:57.937 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:57.937 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:58.195 LIB libspdk_event_scsi.a 00:02:58.195 LIB libspdk_event_nbd.a 00:02:58.195 LIB libspdk_event_ublk.a 00:02:58.195 SO libspdk_event_scsi.so.6.0 00:02:58.195 SO libspdk_event_nbd.so.6.0 00:02:58.195 SO libspdk_event_ublk.so.3.0 00:02:58.195 SYMLINK libspdk_event_scsi.so 00:02:58.195 SYMLINK libspdk_event_nbd.so 00:02:58.195 LIB libspdk_event_nvmf.a 00:02:58.195 SYMLINK libspdk_event_ublk.so 00:02:58.452 SO libspdk_event_nvmf.so.6.0 00:02:58.452 SYMLINK libspdk_event_nvmf.so 00:02:58.710 CC module/event/subsystems/iscsi/iscsi.o 00:02:58.710 CC 
module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:58.710 LIB libspdk_event_vhost_scsi.a 00:02:58.710 LIB libspdk_event_iscsi.a 00:02:58.968 SO libspdk_event_iscsi.so.6.0 00:02:58.968 SO libspdk_event_vhost_scsi.so.3.0 00:02:58.968 SYMLINK libspdk_event_vhost_scsi.so 00:02:58.968 SYMLINK libspdk_event_iscsi.so 00:02:59.225 SO libspdk.so.6.0 00:02:59.225 SYMLINK libspdk.so 00:02:59.483 CXX app/trace/trace.o 00:02:59.483 CC app/trace_record/trace_record.o 00:02:59.483 CC app/spdk_lspci/spdk_lspci.o 00:02:59.483 CC app/spdk_nvme_perf/perf.o 00:02:59.483 CC app/spdk_nvme_identify/identify.o 00:02:59.483 CC app/nvmf_tgt/nvmf_main.o 00:02:59.483 CC app/spdk_tgt/spdk_tgt.o 00:02:59.483 CC app/iscsi_tgt/iscsi_tgt.o 00:02:59.741 CC examples/accel/perf/accel_perf.o 00:02:59.741 LINK spdk_lspci 00:02:59.741 CC test/accel/dif/dif.o 00:02:59.741 LINK nvmf_tgt 00:02:59.741 LINK spdk_trace_record 00:02:59.741 LINK iscsi_tgt 00:02:59.999 LINK spdk_tgt 00:02:59.999 LINK spdk_trace 00:02:59.999 CC app/spdk_nvme_discover/discovery_aer.o 00:03:00.257 LINK dif 00:03:00.257 LINK accel_perf 00:03:00.257 CC test/app/bdev_svc/bdev_svc.o 00:03:00.257 LINK spdk_nvme_discover 00:03:00.257 CC test/bdev/bdevio/bdevio.o 00:03:00.257 CC app/spdk_top/spdk_top.o 00:03:00.257 CC examples/bdev/hello_world/hello_bdev.o 00:03:00.257 CC examples/blob/hello_world/hello_blob.o 00:03:00.515 LINK bdev_svc 00:03:00.515 LINK spdk_nvme_perf 00:03:00.515 LINK spdk_nvme_identify 00:03:00.515 CC app/vhost/vhost.o 00:03:00.515 LINK hello_bdev 00:03:00.515 CC examples/ioat/perf/perf.o 00:03:00.515 CC examples/nvme/hello_world/hello_world.o 00:03:00.515 LINK hello_blob 00:03:00.774 LINK bdevio 00:03:00.774 CC examples/ioat/verify/verify.o 00:03:00.774 LINK vhost 00:03:00.774 LINK ioat_perf 00:03:00.774 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:00.774 LINK hello_world 00:03:01.033 CC examples/sock/hello_world/hello_sock.o 00:03:01.033 CC examples/bdev/bdevperf/bdevperf.o 00:03:01.033 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:01.033 LINK verify 00:03:01.033 CC examples/blob/cli/blobcli.o 00:03:01.033 CC examples/nvme/reconnect/reconnect.o 00:03:01.292 LINK hello_sock 00:03:01.292 CC app/spdk_dd/spdk_dd.o 00:03:01.292 CC examples/vmd/lsvmd/lsvmd.o 00:03:01.292 LINK spdk_top 00:03:01.292 CC examples/vmd/led/led.o 00:03:01.292 LINK nvme_fuzz 00:03:01.292 LINK lsvmd 00:03:01.550 LINK led 00:03:01.550 LINK reconnect 00:03:01.550 CC test/app/histogram_perf/histogram_perf.o 00:03:01.550 CC app/fio/nvme/fio_plugin.o 00:03:01.550 LINK blobcli 00:03:01.550 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:01.809 LINK spdk_dd 00:03:01.809 LINK histogram_perf 00:03:01.809 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:01.809 CC test/app/jsoncat/jsoncat.o 00:03:01.809 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:01.809 CC test/blobfs/mkfs/mkfs.o 00:03:01.809 LINK bdevperf 00:03:02.069 LINK jsoncat 00:03:02.069 CC app/fio/bdev/fio_plugin.o 00:03:02.069 LINK mkfs 00:03:02.069 CC test/app/stub/stub.o 00:03:02.069 TEST_HEADER include/spdk/accel.h 00:03:02.069 TEST_HEADER include/spdk/accel_module.h 00:03:02.069 TEST_HEADER include/spdk/assert.h 00:03:02.069 TEST_HEADER include/spdk/barrier.h 00:03:02.069 TEST_HEADER include/spdk/base64.h 00:03:02.069 TEST_HEADER include/spdk/bdev.h 00:03:02.069 TEST_HEADER include/spdk/bdev_module.h 00:03:02.069 TEST_HEADER include/spdk/bdev_zone.h 00:03:02.069 TEST_HEADER include/spdk/bit_array.h 00:03:02.069 TEST_HEADER include/spdk/bit_pool.h 00:03:02.069 TEST_HEADER include/spdk/blob_bdev.h 00:03:02.069 
TEST_HEADER include/spdk/blobfs_bdev.h 00:03:02.069 TEST_HEADER include/spdk/blobfs.h 00:03:02.069 TEST_HEADER include/spdk/blob.h 00:03:02.069 TEST_HEADER include/spdk/conf.h 00:03:02.069 TEST_HEADER include/spdk/config.h 00:03:02.069 TEST_HEADER include/spdk/cpuset.h 00:03:02.069 TEST_HEADER include/spdk/crc16.h 00:03:02.069 TEST_HEADER include/spdk/crc32.h 00:03:02.069 TEST_HEADER include/spdk/crc64.h 00:03:02.069 TEST_HEADER include/spdk/dif.h 00:03:02.069 TEST_HEADER include/spdk/dma.h 00:03:02.069 TEST_HEADER include/spdk/endian.h 00:03:02.069 TEST_HEADER include/spdk/env_dpdk.h 00:03:02.069 TEST_HEADER include/spdk/env.h 00:03:02.069 TEST_HEADER include/spdk/event.h 00:03:02.069 TEST_HEADER include/spdk/fd_group.h 00:03:02.069 TEST_HEADER include/spdk/fd.h 00:03:02.069 TEST_HEADER include/spdk/file.h 00:03:02.069 TEST_HEADER include/spdk/ftl.h 00:03:02.069 TEST_HEADER include/spdk/gpt_spec.h 00:03:02.069 TEST_HEADER include/spdk/hexlify.h 00:03:02.069 TEST_HEADER include/spdk/histogram_data.h 00:03:02.069 TEST_HEADER include/spdk/idxd.h 00:03:02.069 TEST_HEADER include/spdk/idxd_spec.h 00:03:02.069 TEST_HEADER include/spdk/init.h 00:03:02.069 TEST_HEADER include/spdk/ioat.h 00:03:02.069 TEST_HEADER include/spdk/ioat_spec.h 00:03:02.069 TEST_HEADER include/spdk/iscsi_spec.h 00:03:02.069 TEST_HEADER include/spdk/json.h 00:03:02.069 TEST_HEADER include/spdk/jsonrpc.h 00:03:02.069 TEST_HEADER include/spdk/keyring.h 00:03:02.069 TEST_HEADER include/spdk/keyring_module.h 00:03:02.069 TEST_HEADER include/spdk/likely.h 00:03:02.069 TEST_HEADER include/spdk/log.h 00:03:02.069 TEST_HEADER include/spdk/lvol.h 00:03:02.069 TEST_HEADER include/spdk/memory.h 00:03:02.069 TEST_HEADER include/spdk/mmio.h 00:03:02.069 TEST_HEADER include/spdk/nbd.h 00:03:02.328 TEST_HEADER include/spdk/notify.h 00:03:02.328 TEST_HEADER include/spdk/nvme.h 00:03:02.328 TEST_HEADER include/spdk/nvme_intel.h 00:03:02.328 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:02.328 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:02.328 TEST_HEADER include/spdk/nvme_spec.h 00:03:02.328 TEST_HEADER include/spdk/nvme_zns.h 00:03:02.328 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:02.328 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:02.328 TEST_HEADER include/spdk/nvmf.h 00:03:02.328 TEST_HEADER include/spdk/nvmf_spec.h 00:03:02.328 CC examples/nvme/arbitration/arbitration.o 00:03:02.328 TEST_HEADER include/spdk/nvmf_transport.h 00:03:02.328 TEST_HEADER include/spdk/opal.h 00:03:02.328 TEST_HEADER include/spdk/opal_spec.h 00:03:02.328 TEST_HEADER include/spdk/pci_ids.h 00:03:02.328 TEST_HEADER include/spdk/pipe.h 00:03:02.328 TEST_HEADER include/spdk/queue.h 00:03:02.328 TEST_HEADER include/spdk/reduce.h 00:03:02.328 TEST_HEADER include/spdk/rpc.h 00:03:02.328 TEST_HEADER include/spdk/scheduler.h 00:03:02.328 TEST_HEADER include/spdk/scsi.h 00:03:02.328 TEST_HEADER include/spdk/scsi_spec.h 00:03:02.328 TEST_HEADER include/spdk/sock.h 00:03:02.328 TEST_HEADER include/spdk/stdinc.h 00:03:02.328 LINK spdk_nvme 00:03:02.328 TEST_HEADER include/spdk/string.h 00:03:02.328 TEST_HEADER include/spdk/thread.h 00:03:02.328 LINK stub 00:03:02.328 TEST_HEADER include/spdk/trace.h 00:03:02.328 TEST_HEADER include/spdk/trace_parser.h 00:03:02.328 TEST_HEADER include/spdk/tree.h 00:03:02.328 TEST_HEADER include/spdk/ublk.h 00:03:02.328 TEST_HEADER include/spdk/util.h 00:03:02.328 TEST_HEADER include/spdk/uuid.h 00:03:02.328 TEST_HEADER include/spdk/version.h 00:03:02.328 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:02.328 TEST_HEADER 
include/spdk/vfio_user_spec.h 00:03:02.328 TEST_HEADER include/spdk/vhost.h 00:03:02.328 TEST_HEADER include/spdk/vmd.h 00:03:02.328 TEST_HEADER include/spdk/xor.h 00:03:02.328 TEST_HEADER include/spdk/zipf.h 00:03:02.328 CXX test/cpp_headers/accel.o 00:03:02.328 LINK vhost_fuzz 00:03:02.328 LINK nvme_manage 00:03:02.328 CC test/dma/test_dma/test_dma.o 00:03:02.586 CXX test/cpp_headers/accel_module.o 00:03:02.586 LINK arbitration 00:03:02.586 CC test/event/event_perf/event_perf.o 00:03:02.587 LINK spdk_bdev 00:03:02.587 CXX test/cpp_headers/assert.o 00:03:02.587 CC test/env/mem_callbacks/mem_callbacks.o 00:03:02.845 CC test/rpc_client/rpc_client_test.o 00:03:02.845 CC test/lvol/esnap/esnap.o 00:03:02.845 LINK event_perf 00:03:02.845 CC test/nvme/aer/aer.o 00:03:02.845 LINK test_dma 00:03:02.845 CXX test/cpp_headers/barrier.o 00:03:02.845 CC test/env/vtophys/vtophys.o 00:03:02.845 CC examples/nvme/hotplug/hotplug.o 00:03:02.845 LINK rpc_client_test 00:03:03.102 CC test/event/reactor/reactor.o 00:03:03.102 CXX test/cpp_headers/base64.o 00:03:03.102 LINK vtophys 00:03:03.102 LINK aer 00:03:03.102 LINK hotplug 00:03:03.102 LINK reactor 00:03:03.102 LINK iscsi_fuzz 00:03:03.361 CXX test/cpp_headers/bdev.o 00:03:03.361 LINK mem_callbacks 00:03:03.361 CC examples/nvmf/nvmf/nvmf.o 00:03:03.361 CC examples/util/zipf/zipf.o 00:03:03.361 CC test/event/reactor_perf/reactor_perf.o 00:03:03.361 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:03.361 CXX test/cpp_headers/bdev_module.o 00:03:03.361 CC test/nvme/reset/reset.o 00:03:03.361 CC examples/thread/thread/thread_ex.o 00:03:03.619 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:03.619 LINK zipf 00:03:03.619 CC examples/nvme/abort/abort.o 00:03:03.619 LINK reactor_perf 00:03:03.619 LINK cmb_copy 00:03:03.619 LINK nvmf 00:03:03.619 CXX test/cpp_headers/bdev_zone.o 00:03:03.619 LINK env_dpdk_post_init 00:03:03.878 LINK reset 00:03:03.878 LINK thread 00:03:03.878 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:03.878 CXX test/cpp_headers/bit_array.o 00:03:03.878 CC test/event/app_repeat/app_repeat.o 00:03:03.878 CC test/event/scheduler/scheduler.o 00:03:03.878 LINK abort 00:03:04.138 CC test/env/memory/memory_ut.o 00:03:04.138 LINK pmr_persistence 00:03:04.138 CXX test/cpp_headers/bit_pool.o 00:03:04.138 LINK app_repeat 00:03:04.138 CC test/nvme/sgl/sgl.o 00:03:04.138 CC test/thread/poller_perf/poller_perf.o 00:03:04.138 CC test/env/pci/pci_ut.o 00:03:04.138 LINK scheduler 00:03:04.138 CXX test/cpp_headers/blob_bdev.o 00:03:04.138 LINK poller_perf 00:03:04.138 CXX test/cpp_headers/blobfs_bdev.o 00:03:04.397 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:04.397 CC examples/idxd/perf/perf.o 00:03:04.397 LINK sgl 00:03:04.397 CXX test/cpp_headers/blobfs.o 00:03:04.397 CXX test/cpp_headers/blob.o 00:03:04.397 CC test/nvme/e2edp/nvme_dp.o 00:03:04.397 LINK interrupt_tgt 00:03:04.656 LINK pci_ut 00:03:04.656 CC test/nvme/overhead/overhead.o 00:03:04.656 CXX test/cpp_headers/conf.o 00:03:04.656 CC test/nvme/err_injection/err_injection.o 00:03:04.656 CC test/nvme/startup/startup.o 00:03:04.656 LINK idxd_perf 00:03:04.914 CXX test/cpp_headers/config.o 00:03:04.914 LINK nvme_dp 00:03:04.914 CC test/nvme/reserve/reserve.o 00:03:04.914 CXX test/cpp_headers/cpuset.o 00:03:04.914 LINK err_injection 00:03:04.914 CXX test/cpp_headers/crc16.o 00:03:04.914 LINK startup 00:03:04.914 LINK memory_ut 00:03:04.914 LINK overhead 00:03:04.914 CXX test/cpp_headers/crc32.o 00:03:04.914 CC test/nvme/simple_copy/simple_copy.o 00:03:04.914 CXX test/cpp_headers/crc64.o 
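The TEST_HEADER list above and the CXX test/cpp_headers/*.o compiles interleaved with the nvme tests through this stretch belong to the header self-containedness check: every public spdk/*.h is compiled in its own otherwise empty C++ translation unit, so a header that forgets one of its own includes, or is not C++-clean, fails in isolation instead of deep inside an application build. The generator is not reproduced in this log; a minimal bash sketch of the idea, with hypothetical check_*.cpp names:

    # Hedged sketch: one empty C++ TU per public header.
    # File names are illustrative, not the real harness.
    for hdr in accel assert barrier base64; do
        printf '#include <spdk/%s.h>\n' "$hdr" > "check_${hdr}.cpp"
        c++ -Iinclude -c "check_${hdr}.cpp" -o "check_${hdr}.o"
    done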
00:03:05.173 LINK reserve 00:03:05.173 CC test/nvme/connect_stress/connect_stress.o 00:03:05.173 CC test/nvme/boot_partition/boot_partition.o 00:03:05.173 CC test/nvme/compliance/nvme_compliance.o 00:03:05.173 CXX test/cpp_headers/dif.o 00:03:05.173 CC test/nvme/fused_ordering/fused_ordering.o 00:03:05.173 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:05.173 LINK simple_copy 00:03:05.173 LINK connect_stress 00:03:05.173 LINK boot_partition 00:03:05.431 CC test/nvme/fdp/fdp.o 00:03:05.431 CXX test/cpp_headers/dma.o 00:03:05.431 CC test/nvme/cuse/cuse.o 00:03:05.431 LINK fused_ordering 00:03:05.431 LINK doorbell_aers 00:03:05.431 CXX test/cpp_headers/endian.o 00:03:05.431 CXX test/cpp_headers/env_dpdk.o 00:03:05.431 CXX test/cpp_headers/env.o 00:03:05.431 CXX test/cpp_headers/event.o 00:03:05.431 LINK nvme_compliance 00:03:05.690 CXX test/cpp_headers/fd_group.o 00:03:05.690 CXX test/cpp_headers/fd.o 00:03:05.690 CXX test/cpp_headers/file.o 00:03:05.690 CXX test/cpp_headers/ftl.o 00:03:05.690 CXX test/cpp_headers/gpt_spec.o 00:03:05.690 LINK fdp 00:03:05.690 CXX test/cpp_headers/hexlify.o 00:03:05.690 CXX test/cpp_headers/histogram_data.o 00:03:05.690 CXX test/cpp_headers/idxd.o 00:03:05.690 CXX test/cpp_headers/idxd_spec.o 00:03:05.690 CXX test/cpp_headers/init.o 00:03:05.690 CXX test/cpp_headers/ioat.o 00:03:05.952 CXX test/cpp_headers/ioat_spec.o 00:03:05.952 CXX test/cpp_headers/iscsi_spec.o 00:03:05.952 CXX test/cpp_headers/json.o 00:03:05.952 CXX test/cpp_headers/jsonrpc.o 00:03:05.952 CXX test/cpp_headers/keyring.o 00:03:05.952 CXX test/cpp_headers/keyring_module.o 00:03:05.952 CXX test/cpp_headers/likely.o 00:03:05.952 CXX test/cpp_headers/log.o 00:03:05.952 CXX test/cpp_headers/lvol.o 00:03:05.952 CXX test/cpp_headers/memory.o 00:03:05.952 CXX test/cpp_headers/mmio.o 00:03:06.210 CXX test/cpp_headers/nbd.o 00:03:06.210 CXX test/cpp_headers/notify.o 00:03:06.210 CXX test/cpp_headers/nvme.o 00:03:06.210 CXX test/cpp_headers/nvme_intel.o 00:03:06.210 CXX test/cpp_headers/nvme_ocssd.o 00:03:06.210 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:06.210 CXX test/cpp_headers/nvme_spec.o 00:03:06.210 CXX test/cpp_headers/nvme_zns.o 00:03:06.210 CXX test/cpp_headers/nvmf_cmd.o 00:03:06.210 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:06.210 CXX test/cpp_headers/nvmf.o 00:03:06.210 CXX test/cpp_headers/nvmf_spec.o 00:03:06.468 CXX test/cpp_headers/nvmf_transport.o 00:03:06.468 CXX test/cpp_headers/opal.o 00:03:06.468 CXX test/cpp_headers/opal_spec.o 00:03:06.468 CXX test/cpp_headers/pci_ids.o 00:03:06.468 CXX test/cpp_headers/pipe.o 00:03:06.468 LINK cuse 00:03:06.468 CXX test/cpp_headers/queue.o 00:03:06.468 CXX test/cpp_headers/reduce.o 00:03:06.468 CXX test/cpp_headers/rpc.o 00:03:06.468 CXX test/cpp_headers/scheduler.o 00:03:06.468 CXX test/cpp_headers/scsi.o 00:03:06.468 CXX test/cpp_headers/scsi_spec.o 00:03:06.468 CXX test/cpp_headers/sock.o 00:03:06.468 CXX test/cpp_headers/stdinc.o 00:03:06.468 CXX test/cpp_headers/string.o 00:03:06.725 CXX test/cpp_headers/thread.o 00:03:06.725 CXX test/cpp_headers/trace.o 00:03:06.725 CXX test/cpp_headers/trace_parser.o 00:03:06.725 CXX test/cpp_headers/tree.o 00:03:06.725 CXX test/cpp_headers/ublk.o 00:03:06.725 CXX test/cpp_headers/util.o 00:03:06.725 CXX test/cpp_headers/uuid.o 00:03:06.725 CXX test/cpp_headers/version.o 00:03:06.725 CXX test/cpp_headers/vfio_user_pci.o 00:03:06.725 CXX test/cpp_headers/vfio_user_spec.o 00:03:06.725 CXX test/cpp_headers/vhost.o 00:03:06.725 CXX test/cpp_headers/vmd.o 00:03:06.983 CXX test/cpp_headers/xor.o 
00:03:06.983 CXX test/cpp_headers/zipf.o 00:03:08.355 LINK esnap 00:03:08.920 00:03:08.920 real 1m9.314s 00:03:08.920 user 6m16.557s 00:03:08.920 sys 1m49.424s 00:03:08.920 20:09:38 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:03:08.920 20:09:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:08.920 ************************************ 00:03:08.920 END TEST make 00:03:08.920 ************************************ 00:03:08.920 20:09:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:08.920 20:09:39 -- pm/common@30 -- $ signal_monitor_resources TERM 00:03:08.920 20:09:39 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:03:08.920 20:09:39 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.920 20:09:39 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:08.920 20:09:39 -- pm/common@45 -- $ pid=5182 00:03:08.920 20:09:39 -- pm/common@52 -- $ sudo kill -TERM 5182 00:03:08.920 20:09:39 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:08.920 20:09:39 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:08.920 20:09:39 -- pm/common@45 -- $ pid=5185 00:03:08.920 20:09:39 -- pm/common@52 -- $ sudo kill -TERM 5185 00:03:09.179 20:09:39 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:09.179 20:09:39 -- nvmf/common.sh@7 -- # uname -s 00:03:09.179 20:09:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:09.179 20:09:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:09.179 20:09:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:09.179 20:09:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:09.179 20:09:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:09.179 20:09:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:09.179 20:09:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:09.179 20:09:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:09.179 20:09:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:09.179 20:09:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:09.179 20:09:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9198a36b-a46e-4e0f-a169-b7f1c9873fac 00:03:09.179 20:09:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=9198a36b-a46e-4e0f-a169-b7f1c9873fac 00:03:09.179 20:09:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:09.179 20:09:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:09.179 20:09:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:09.179 20:09:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:09.179 20:09:39 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:09.179 20:09:39 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:09.179 20:09:39 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:09.179 20:09:39 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:09.179 20:09:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.179 20:09:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.179 20:09:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.179 20:09:39 -- paths/export.sh@5 -- # export PATH 00:03:09.179 20:09:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:09.179 20:09:39 -- nvmf/common.sh@47 -- # : 0 00:03:09.179 20:09:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:09.179 20:09:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:09.179 20:09:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:09.179 20:09:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:09.179 20:09:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:09.179 20:09:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:09.179 20:09:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:09.179 20:09:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:09.179 20:09:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:09.179 20:09:39 -- spdk/autotest.sh@32 -- # uname -s 00:03:09.179 20:09:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:09.179 20:09:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:09.179 20:09:39 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:09.179 20:09:39 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:09.179 20:09:39 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:09.179 20:09:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:09.179 20:09:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:09.179 20:09:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:09.179 20:09:39 -- spdk/autotest.sh@48 -- # udevadm_pid=53060 00:03:09.179 20:09:39 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:09.179 20:09:39 -- pm/common@17 -- # local monitor 00:03:09.179 20:09:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.179 20:09:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:09.179 20:09:39 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=53062 00:03:09.179 20:09:39 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:09.179 20:09:39 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=53064 00:03:09.179 20:09:39 -- pm/common@21 -- # date +%s 00:03:09.179 20:09:39 -- pm/common@26 -- # sleep 1 00:03:09.179 20:09:39 -- pm/common@21 -- # date +%s 00:03:09.179 20:09:39 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713989379 00:03:09.179 20:09:39 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d 
/home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713989379 00:03:09.179 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713989379_collect-vmstat.pm.log 00:03:09.179 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713989379_collect-cpu-load.pm.log 00:03:10.116 20:09:40 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:10.116 20:09:40 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:10.116 20:09:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:10.116 20:09:40 -- common/autotest_common.sh@10 -- # set +x 00:03:10.116 20:09:40 -- spdk/autotest.sh@59 -- # create_test_list 00:03:10.116 20:09:40 -- common/autotest_common.sh@734 -- # xtrace_disable 00:03:10.116 20:09:40 -- common/autotest_common.sh@10 -- # set +x 00:03:10.392 20:09:40 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:10.392 20:09:40 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:10.392 20:09:40 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:10.392 20:09:40 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:10.392 20:09:40 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:10.392 20:09:40 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:10.392 20:09:40 -- common/autotest_common.sh@1441 -- # uname 00:03:10.392 20:09:40 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:03:10.392 20:09:40 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:10.392 20:09:40 -- common/autotest_common.sh@1461 -- # uname 00:03:10.392 20:09:40 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:03:10.392 20:09:40 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:10.392 20:09:40 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:10.392 20:09:40 -- spdk/autotest.sh@72 -- # hash lcov 00:03:10.392 20:09:40 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:10.392 20:09:40 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:10.392 --rc lcov_branch_coverage=1 00:03:10.392 --rc lcov_function_coverage=1 00:03:10.392 --rc genhtml_branch_coverage=1 00:03:10.392 --rc genhtml_function_coverage=1 00:03:10.392 --rc genhtml_legend=1 00:03:10.392 --rc geninfo_all_blocks=1 00:03:10.392 ' 00:03:10.392 20:09:40 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:10.392 --rc lcov_branch_coverage=1 00:03:10.392 --rc lcov_function_coverage=1 00:03:10.392 --rc genhtml_branch_coverage=1 00:03:10.392 --rc genhtml_function_coverage=1 00:03:10.392 --rc genhtml_legend=1 00:03:10.392 --rc geninfo_all_blocks=1 00:03:10.392 ' 00:03:10.392 20:09:40 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:10.392 --rc lcov_branch_coverage=1 00:03:10.392 --rc lcov_function_coverage=1 00:03:10.392 --rc genhtml_branch_coverage=1 00:03:10.392 --rc genhtml_function_coverage=1 00:03:10.392 --rc genhtml_legend=1 00:03:10.392 --rc geninfo_all_blocks=1 00:03:10.392 --no-external' 00:03:10.392 20:09:40 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:10.392 --rc lcov_branch_coverage=1 00:03:10.392 --rc lcov_function_coverage=1 00:03:10.392 --rc genhtml_branch_coverage=1 00:03:10.392 --rc genhtml_function_coverage=1 00:03:10.392 --rc genhtml_legend=1 00:03:10.392 --rc geninfo_all_blocks=1 00:03:10.392 --no-external' 00:03:10.392 20:09:40 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:10.392 lcov: LCOV version 1.14 00:03:10.392 20:09:40 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:18.509 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:18.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:18.509 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:18.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:18.509 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:18.509 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:25.165 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:25.165 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:37.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:37.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:37.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:37.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:37.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:37.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:37.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:37.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:37.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:37.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:37.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:37.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:37.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:37.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:37.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:37.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:37.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:37.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:37.448 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:37.448 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:37.448 
(geninfo repeats the same two messages, "<name>.gcno:no functions found" followed by "geninfo: WARNING: GCOV did not produce any data for <name>.gcno", for each remaining test/cpp_headers object from blob_bdev.gcno through trace.gcno.)
/home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:37.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:37.449 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:37.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:37.449 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:37.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:37.449 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:37.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:37.449 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:37.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:37.449 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:37.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:37.449 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:37.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:37.449 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:37.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:37.449 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:37.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:37.449 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:37.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:37.449 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:37.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:37.449 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:37.449 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:40.016 20:10:09 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:40.016 20:10:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:40.016 20:10:09 -- common/autotest_common.sh@10 -- # set +x 00:03:40.016 20:10:09 -- spdk/autotest.sh@91 -- # rm -f 00:03:40.016 20:10:09 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:40.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:40.865 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:40.865 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:41.124 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:41.124 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:41.124 20:10:11 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:41.124 20:10:11 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:41.124 20:10:11 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 
00:03:41.124 20:10:11 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:41.124 20:10:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:41.124 20:10:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:41.124 20:10:11 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:41.124 20:10:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:41.124 20:10:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:41.124 20:10:11 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:41.124 20:10:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:41.124 20:10:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:03:41.124 20:10:11 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:03:41.124 20:10:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:41.124 20:10:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:03:41.124 20:10:11 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:03:41.124 20:10:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:41.124 20:10:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:03:41.124 20:10:11 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:03:41.124 20:10:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:41.124 20:10:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:03:41.124 20:10:11 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:03:41.124 20:10:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:41.124 20:10:11 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:03:41.124 20:10:11 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:03:41.124 20:10:11 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:41.124 20:10:11 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:41.124 20:10:11 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:41.124 20:10:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.125 20:10:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.125 20:10:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:41.125 20:10:11 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:41.125 20:10:11 -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:41.125 No valid GPT data, bailing 00:03:41.125 20:10:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:41.125 20:10:11 -- scripts/common.sh@391 -- # pt= 00:03:41.125 20:10:11 -- scripts/common.sh@392 -- # return 1 00:03:41.125 20:10:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:41.125 1+0 records in 00:03:41.125 1+0 records out 00:03:41.125 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185347 s, 56.6 MB/s 00:03:41.125 20:10:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.125 20:10:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.125 20:10:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:41.125 20:10:11 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:41.125 20:10:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:41.125 No valid GPT data, bailing 00:03:41.125 20:10:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:41.384 20:10:11 -- scripts/common.sh@391 -- # pt= 00:03:41.384 20:10:11 -- scripts/common.sh@392 -- # return 1 00:03:41.384 20:10:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:41.384 1+0 records in 00:03:41.384 1+0 records out 00:03:41.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00354144 s, 296 MB/s 00:03:41.384 20:10:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.384 20:10:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.384 20:10:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:03:41.384 20:10:11 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:03:41.384 20:10:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:41.384 No valid GPT data, bailing 00:03:41.384 20:10:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:41.384 20:10:11 -- scripts/common.sh@391 -- # pt= 00:03:41.384 20:10:11 -- scripts/common.sh@392 -- # return 1 00:03:41.384 20:10:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:41.384 1+0 records in 00:03:41.384 1+0 records out 00:03:41.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00399803 s, 262 MB/s 00:03:41.384 20:10:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.384 20:10:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.384 20:10:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n2 00:03:41.384 20:10:11 -- scripts/common.sh@378 -- # local block=/dev/nvme2n2 pt 00:03:41.384 20:10:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:41.384 No valid GPT data, bailing 00:03:41.384 20:10:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:41.384 20:10:11 -- scripts/common.sh@391 -- # pt= 00:03:41.384 20:10:11 -- scripts/common.sh@392 -- # return 1 00:03:41.384 20:10:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:41.384 1+0 records in 00:03:41.384 1+0 records out 00:03:41.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00562834 s, 186 MB/s 00:03:41.384 20:10:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.384 20:10:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.384 20:10:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n3 00:03:41.384 20:10:11 -- scripts/common.sh@378 -- # local block=/dev/nvme2n3 pt 00:03:41.384 20:10:11 -- 
scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:41.384 No valid GPT data, bailing 00:03:41.384 20:10:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:41.384 20:10:11 -- scripts/common.sh@391 -- # pt= 00:03:41.384 20:10:11 -- scripts/common.sh@392 -- # return 1 00:03:41.384 20:10:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:41.644 1+0 records in 00:03:41.644 1+0 records out 00:03:41.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00521646 s, 201 MB/s 00:03:41.644 20:10:11 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:41.644 20:10:11 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:41.644 20:10:11 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:03:41.644 20:10:11 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:03:41.644 20:10:11 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:41.644 No valid GPT data, bailing 00:03:41.644 20:10:11 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:41.644 20:10:11 -- scripts/common.sh@391 -- # pt= 00:03:41.644 20:10:11 -- scripts/common.sh@392 -- # return 1 00:03:41.644 20:10:11 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:41.644 1+0 records in 00:03:41.644 1+0 records out 00:03:41.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00680195 s, 154 MB/s 00:03:41.644 20:10:11 -- spdk/autotest.sh@118 -- # sync 00:03:41.644 20:10:11 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:41.644 20:10:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:41.644 20:10:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:44.943 20:10:14 -- spdk/autotest.sh@124 -- # uname -s 00:03:44.943 20:10:14 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:44.943 20:10:14 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:44.943 20:10:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:44.943 20:10:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:44.943 20:10:14 -- common/autotest_common.sh@10 -- # set +x 00:03:44.943 ************************************ 00:03:44.943 START TEST setup.sh 00:03:44.943 ************************************ 00:03:44.943 20:10:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:44.943 * Looking for test storage... 00:03:44.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:44.943 20:10:14 -- setup/test-setup.sh@10 -- # uname -s 00:03:44.943 20:10:14 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:44.943 20:10:14 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:44.943 20:10:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:44.943 20:10:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:44.943 20:10:14 -- common/autotest_common.sh@10 -- # set +x 00:03:44.943 ************************************ 00:03:44.943 START TEST acl 00:03:44.943 ************************************ 00:03:44.943 20:10:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:44.943 * Looking for test storage... 
00:03:44.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:44.943 20:10:14 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:44.943 20:10:14 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:44.943 20:10:14 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:44.943 20:10:14 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:44.943 20:10:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:44.943 20:10:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:44.943 20:10:14 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:44.943 20:10:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:44.943 20:10:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:03:44.943 20:10:14 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:03:44.943 20:10:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:44.943 20:10:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:03:44.943 20:10:14 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:03:44.943 20:10:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:44.943 20:10:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:03:44.943 20:10:14 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:03:44.943 20:10:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:44.943 20:10:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:03:44.943 20:10:14 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:03:44.943 20:10:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:44.943 20:10:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:03:44.943 20:10:14 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:03:44.943 20:10:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:44.943 20:10:14 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:03:44.943 20:10:14 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:03:44.943 20:10:14 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:44.943 20:10:14 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:44.943 20:10:14 -- setup/acl.sh@12 -- # devs=() 00:03:44.943 20:10:14 -- setup/acl.sh@12 -- # declare -a devs 
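Both get_zoned_devs passes traced above (the pre_cleanup one and this acl.sh one) reduce to a single test per /sys/block/nvme* entry: a device counts as zoned when its queue/zoned attribute exists and reads anything other than none, which is why every device in this VM evaluates [[ none != none ]] and the zoned_devs array stays empty. A reconstruction of that check from the trace, offered as a sketch rather than the verbatim helper in autotest_common.sh:

    # Reconstructed from the is_block_zoned trace; sketch only.
    is_block_zoned() {
        local device=$1
        # Kernels without zoned block support do not expose the attribute.
        [[ -e /sys/block/$device/queue/zoned ]] || return 1
        [[ $(< "/sys/block/$device/queue/zoned") != none ]]
    }
    is_block_zoned nvme0n1 && echo "nvme0n1 is zoned"

A host-managed or host-aware drive would read exactly that from the attribute; on these QEMU NVMe drives it is none, so nothing is excluded from the tests that follow.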
00:03:44.943 20:10:14 -- setup/acl.sh@13 -- # drivers=()
00:03:44.943 20:10:14 -- setup/acl.sh@13 -- # declare -A drivers
00:03:44.943 20:10:14 -- setup/acl.sh@51 -- # setup reset
00:03:44.943 20:10:14 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:44.943 20:10:14 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:46.337 20:10:16 -- setup/acl.sh@52 -- # collect_setup_devs
00:03:46.338 20:10:16 -- setup/acl.sh@16 -- # local dev driver
00:03:46.338 20:10:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:46.338 20:10:16 -- setup/acl.sh@15 -- # setup output status
00:03:46.338 20:10:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:46.338 20:10:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:46.922 20:10:17 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]]
00:03:46.922 20:10:17 -- setup/acl.sh@19 -- # continue
00:03:46.922 20:10:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:47.515 Hugepages
00:03:47.515 node hugesize free / total
00:03:47.515 20:10:17 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:03:47.515 20:10:17 -- setup/acl.sh@19 -- # continue
00:03:47.515 20:10:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:47.515
00:03:47.515 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:47.515 20:10:17 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:03:47.515 20:10:17 -- setup/acl.sh@19 -- # continue
00:03:47.515 20:10:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:47.789 20:10:17 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:03:47.789 20:10:17 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:03:47.789 20:10:17 -- setup/acl.sh@20 -- # continue
00:03:47.789 20:10:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:47.789 20:10:17 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]]
00:03:47.789 20:10:17 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:47.789 20:10:17 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]]
00:03:47.789 20:10:17 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:47.789 20:10:17 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:47.789 20:10:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:47.789 20:10:17 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]]
00:03:47.789 20:10:17 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:47.789 20:10:17 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]]
00:03:47.789 20:10:17 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:47.789 20:10:17 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:47.789 20:10:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:48.066 20:10:18 -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]]
00:03:48.066 20:10:18 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:48.066 20:10:18 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]]
00:03:48.066 20:10:18 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:48.066 20:10:18 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:48.066 20:10:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:48.066 20:10:18 -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]]
00:03:48.066 20:10:18 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:48.066 20:10:18 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]]
00:03:48.066 20:10:18 -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:48.066 20:10:18 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:48.066 20:10:18 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:48.066 20:10:18 -- setup/acl.sh@24 -- # (( 4 > 0 ))
00:03:48.066 20:10:18 -- setup/acl.sh@54 -- # run_test denied denied
00:03:48.066 20:10:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:48.066 20:10:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:48.066 20:10:18 -- common/autotest_common.sh@10 -- # set +x
00:03:48.066 ************************************
00:03:48.066 START TEST denied
00:03:48.066 ************************************
00:03:48.066 20:10:18 -- common/autotest_common.sh@1111 -- # denied
00:03:48.066 20:10:18 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0'
00:03:48.066 20:10:18 -- setup/acl.sh@38 -- # setup output config
00:03:48.066 20:10:18 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0'
00:03:48.066 20:10:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:48.066 20:10:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:03:50.025 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0
00:03:50.025 20:10:19 -- setup/acl.sh@40 -- # verify 0000:00:10.0
00:03:50.025 20:10:19 -- setup/acl.sh@28 -- # local dev driver
00:03:50.025 20:10:19 -- setup/acl.sh@30 -- # for dev in "$@"
00:03:50.025 20:10:19 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]]
00:03:50.025 20:10:19 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver
00:03:50.025 20:10:19 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:50.025 20:10:19 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:50.025 20:10:19 -- setup/acl.sh@41 -- # setup reset
00:03:50.025 20:10:19 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:50.025 20:10:19 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:56.691
00:03:56.691 real 0m7.752s
00:03:56.691 user 0m0.990s
00:03:56.691 sys 0m1.889s
00:03:56.691 20:10:26 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:56.691 ************************************
00:03:56.691 END TEST denied
00:03:56.691 ************************************
00:03:56.691 20:10:26 -- common/autotest_common.sh@10 -- # set +x
00:03:56.691 20:10:26 -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:56.691 20:10:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:56.691 20:10:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:56.691 20:10:26 -- common/autotest_common.sh@10 -- # set +x
00:03:56.691 ************************************
00:03:56.691 START TEST allowed
00:03:56.691 ************************************
00:03:56.691 20:10:26 -- common/autotest_common.sh@1111 -- # allowed
00:03:56.691 20:10:26 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*'
00:03:56.691 20:10:26 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0
00:03:56.691 20:10:26 -- setup/acl.sh@45 -- # setup output config
00:03:56.691 20:10:26 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:56.691 20:10:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:03:57.628 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:03:57.628 20:10:27 -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:03:57.628 20:10:27 -- setup/acl.sh@28 -- # local dev driver
00:03:57.628 20:10:27 -- setup/acl.sh@30 -- # for dev in "$@"
00:03:57.628 20:10:27 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]]
00:03:57.628 20:10:27 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver
00:03:57.628 20:10:27 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:57.628 20:10:27 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:57.628 20:10:27 -- setup/acl.sh@30 -- # for dev in "$@"
00:03:57.628 20:10:27 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]]
00:03:57.628 20:10:27 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver
00:03:57.628 20:10:27 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:57.628 20:10:27 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:57.628 20:10:27 -- setup/acl.sh@30 -- # for dev in "$@"
00:03:57.628 20:10:27 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]]
00:03:57.628 20:10:27 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver
00:03:57.628 20:10:27 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:57.628 20:10:27 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:57.628 20:10:27 -- setup/acl.sh@48 -- # setup reset
00:03:57.628 20:10:27 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:57.628 20:10:27 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:59.007
00:03:59.007 real 0m2.740s
00:03:59.007 user 0m1.090s
00:03:59.007 sys 0m1.666s
00:03:59.007 20:10:28 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:59.007 ************************************
00:03:59.007 END TEST allowed
00:03:59.007 ************************************
00:03:59.007 20:10:28 -- common/autotest_common.sh@10 -- # set +x
00:03:59.007 ************************************
00:03:59.007 END TEST acl
00:03:59.007 ************************************
00:03:59.007
00:03:59.007 real 0m14.132s
00:03:59.007 user 0m3.589s
00:03:59.007 sys 0m5.672s
00:03:59.007 20:10:28 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:59.007 20:10:28 -- common/autotest_common.sh@10 -- # set +x
00:03:59.007 20:10:29 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:03:59.007 20:10:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:59.007 20:10:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:59.007 20:10:29 -- common/autotest_common.sh@10 -- # set +x
00:03:59.007 ************************************
00:03:59.007 START TEST hugepages
00:03:59.007 ************************************
00:03:59.007 20:10:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:03:59.267 * Looking for test storage...
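Both ACL tests above reduce to the same check: after setup.sh has done its rebinding, verify() resolves each BDF's driver symlink and compares the basename against the expected driver. Condensed into a runnable form from the statements in this trace:

    verify() {
      local dev driver
      for dev in "$@"; do
        [[ -e /sys/bus/pci/devices/$dev ]] || return 1
        driver=$(readlink -f "/sys/bus/pci/devices/$dev/driver")
        [[ ${driver##*/} == nvme ]] || return 1   # still bound to the kernel nvme driver
      done
    }
    # the allowed-test call above: only PCI_ALLOWED (0000:00:10.0) moved to
    # uio_pci_generic, so the remaining controllers must still report nvme
    verify 0000:00:11.0 0000:00:12.0 0000:00:13.0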
00:03:59.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:03:59.267 20:10:29 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:59.267 20:10:29 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:59.267 20:10:29 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:59.268 20:10:29 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:59.268 20:10:29 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:59.268 20:10:29 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:59.268 20:10:29 -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:59.268 20:10:29 -- setup/common.sh@18 -- # local node=
00:03:59.268 20:10:29 -- setup/common.sh@19 -- # local var val
00:03:59.268 20:10:29 -- setup/common.sh@20 -- # local mem_f mem
00:03:59.268 20:10:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.268 20:10:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.268 20:10:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.268 20:10:29 -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.268 20:10:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.268 20:10:29 -- setup/common.sh@31 -- # IFS=': '
00:03:59.268 20:10:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 5364368 kB' 'MemAvailable: 7361272 kB' 'Buffers: 2436 kB' 'Cached: 2208544 kB' 'SwapCached: 0 kB' 'Active: 846964 kB' 'Inactive: 1474060 kB' 'Active(anon): 120556 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474060 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 868 kB' 'Writeback: 0 kB' 'AnonPages: 111668 kB' 'Mapped: 48928 kB' 'Shmem: 10512 kB' 'KReclaimable: 66748 kB' 'Slab: 144364 kB' 'SReclaimable: 66748 kB' 'SUnreclaim: 77616 kB' 'KernelStack: 6412 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 338332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:03:59.268 20:10:29 -- setup/common.sh@31 -- # read -r var val _
00:03:59.268 [xtrace condensed: every meminfo field from MemTotal through HugePages_Surp fails the [[ $var == Hugepagesize ]] test and hits 'continue' in setup/common.sh@32, with IFS=': ' / read -r var val _ between iterations]
00:03:59.269 20:10:29 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:59.269 20:10:29 -- setup/common.sh@33 -- # echo 2048
00:03:59.269 20:10:29 -- setup/common.sh@33 -- # return 0
00:03:59.269 20:10:29 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:59.269 20:10:29 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:59.269 20:10:29 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
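The condensed scan above is the whole of get_meminfo: snapshot the meminfo file, strip any per-node "Node N " prefix, and echo the value of the first field whose name matches. A bash sketch reconstructed from this xtrace (argument handling beyond what the trace shows is an assumption):

    get_meminfo() {
      local get=$1 node=${2:-} var val _
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      shopt -s extglob
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # per-node files prefix each line with "Node N "
      local line
      for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
    }
    get_meminfo Hugepagesize              # prints 2048, exactly as echoed above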
00:03:59.269 20:10:29 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:59.269 20:10:29 -- setup/hugepages.sh@207 -- # get_nodes 00:03:59.269 20:10:29 -- setup/hugepages.sh@27 -- # local node 00:03:59.269 20:10:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.269 20:10:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:59.269 20:10:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:59.269 20:10:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.269 20:10:29 -- setup/hugepages.sh@208 -- # clear_hp 00:03:59.269 20:10:29 -- setup/hugepages.sh@37 -- # local node hp 00:03:59.269 20:10:29 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:59.269 20:10:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.269 20:10:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.269 20:10:29 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:59.269 20:10:29 -- setup/hugepages.sh@41 -- # echo 0 00:03:59.269 20:10:29 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:59.269 20:10:29 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:59.269 20:10:29 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:59.269 20:10:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.269 20:10:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.269 20:10:29 -- common/autotest_common.sh@10 -- # set +x 00:03:59.269 ************************************ 00:03:59.269 START TEST default_setup 00:03:59.269 ************************************ 00:03:59.269 20:10:29 -- common/autotest_common.sh@1111 -- # default_setup 00:03:59.269 20:10:29 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:59.269 20:10:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.269 20:10:29 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.269 20:10:29 -- setup/hugepages.sh@51 -- # shift 00:03:59.269 20:10:29 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.269 20:10:29 -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.269 20:10:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.269 20:10:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.269 20:10:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.269 20:10:29 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.269 20:10:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.269 20:10:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.269 20:10:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:59.269 20:10:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.269 20:10:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.269 20:10:29 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.269 20:10:29 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.269 20:10:29 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.269 20:10:29 -- setup/hugepages.sh@73 -- # return 0 00:03:59.269 20:10:29 -- setup/hugepages.sh@137 -- # setup output 00:03:59.269 20:10:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.269 20:10:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:00.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.773 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.773 
0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.773 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.036 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.036 20:10:31 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:01.036 20:10:31 -- setup/hugepages.sh@89 -- # local node 00:04:01.036 20:10:31 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.036 20:10:31 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.036 20:10:31 -- setup/hugepages.sh@92 -- # local surp 00:04:01.036 20:10:31 -- setup/hugepages.sh@93 -- # local resv 00:04:01.036 20:10:31 -- setup/hugepages.sh@94 -- # local anon 00:04:01.036 20:10:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.036 20:10:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.036 20:10:31 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.036 20:10:31 -- setup/common.sh@18 -- # local node= 00:04:01.036 20:10:31 -- setup/common.sh@19 -- # local var val 00:04:01.036 20:10:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.036 20:10:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.036 20:10:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.036 20:10:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.036 20:10:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.036 20:10:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.036 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.036 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7482924 kB' 'MemAvailable: 9479584 kB' 'Buffers: 2436 kB' 'Cached: 2208536 kB' 'SwapCached: 0 kB' 'Active: 860144 kB' 'Inactive: 1474080 kB' 'Active(anon): 133736 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1028 kB' 'Writeback: 0 kB' 'AnonPages: 124828 kB' 'Mapped: 48996 kB' 'Shmem: 10472 kB' 'KReclaimable: 66220 kB' 'Slab: 143456 kB' 'SReclaimable: 66220 kB' 'SUnreclaim: 77236 kB' 'KernelStack: 6448 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 
20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- 
setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 
-- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.037 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.037 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.038 20:10:31 -- setup/common.sh@33 -- # echo 0 00:04:01.038 
20:10:31 -- setup/common.sh@33 -- # return 0 00:04:01.038 20:10:31 -- setup/hugepages.sh@97 -- # anon=0 00:04:01.038 20:10:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.038 20:10:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.038 20:10:31 -- setup/common.sh@18 -- # local node= 00:04:01.038 20:10:31 -- setup/common.sh@19 -- # local var val 00:04:01.038 20:10:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:01.038 20:10:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.038 20:10:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.038 20:10:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.038 20:10:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.038 20:10:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7482672 kB' 'MemAvailable: 9479332 kB' 'Buffers: 2436 kB' 'Cached: 2208536 kB' 'SwapCached: 0 kB' 'Active: 859728 kB' 'Inactive: 1474080 kB' 'Active(anon): 133320 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1028 kB' 'Writeback: 0 kB' 'AnonPages: 124476 kB' 'Mapped: 48880 kB' 'Shmem: 10472 kB' 'KReclaimable: 66220 kB' 'Slab: 143456 kB' 'SReclaimable: 66220 kB' 'SUnreclaim: 77236 kB' 'KernelStack: 6464 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.038 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.038 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.039 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.039 20:10:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:01.039 20:10:31 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.039 20:10:31 -- setup/common.sh@32 -- # continue 00:04:01.039 20:10:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:01.039 20:10:31 -- setup/common.sh@31 -- # 
read -r var val _
00:04:01.039 20:10:31 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.039 20:10:31 -- setup/common.sh@32 -- # continue
[xtrace condensed: the same three entries -- "# [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]", "# continue", "# IFS=': ' / # read -r var val _" -- repeat for every remaining /proc/meminfo field (Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd) until the requested key matches]
00:04:01.039 20:10:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.039 20:10:31 -- setup/common.sh@33 -- # echo 0
00:04:01.039 20:10:31 -- setup/common.sh@33 -- # return 0
00:04:01.039 20:10:31 -- setup/hugepages.sh@99 -- # surp=0
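[Editor's note] The scan traced above is the get_meminfo helper from setup/common.sh walking /proc/meminfo one field at a time. Below is a minimal, self-contained Bash sketch of what the trace shows; the function name and variable names are illustrative, not copied from the SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern below

    # get_meminfo_sketch KEY [NODE]: print KEY's value from /proc/meminfo,
    # or from /sys/devices/system/node/node$NODE/meminfo when NODE is given.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local var val rest line
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # per-node files prefix each line with "Node N "; strip that off
        mem=("${mem[@]#Node +([0-9]) }")
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val rest <<< "$line"
            # the xtrace above is exactly this comparison, once per field
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

Called as get_meminfo_sketch HugePages_Surp it would print 0 for the snapshot in this run; get_meminfo_sketch HugePages_Surp 0 would read node0's per-node copy instead.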
00:04:01.039 20:10:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:01.039 20:10:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:01.039 20:10:31 -- setup/common.sh@18 -- # local node=
00:04:01.039 20:10:31 -- setup/common.sh@19 -- # local var val
00:04:01.039 20:10:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.039 20:10:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.039 20:10:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.039 20:10:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.039 20:10:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.039 20:10:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.039 20:10:31 -- setup/common.sh@31 -- # IFS=': '
00:04:01.039 20:10:31 -- setup/common.sh@31 -- # read -r var val _
00:04:01.039 20:10:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7482688 kB' 'MemAvailable: 9479348 kB' 'Buffers: 2436 kB' 'Cached: 2208536 kB' 'SwapCached: 0 kB' 'Active: 859768 kB' 'Inactive: 1474080 kB' 'Active(anon): 133360 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1028 kB' 'Writeback: 0 kB' 'AnonPages: 124508 kB' 'Mapped: 48880 kB' 'Shmem: 10472 kB' 'KReclaimable: 66220 kB' 'Slab: 143456 kB' 'SReclaimable: 66220 kB' 'SUnreclaim: 77236 kB' 'KernelStack: 6480 kB' 'PageTables: 4564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: the same per-field scan, now against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, walks MemTotal through HugePages_Free before the key matches]
00:04:01.040 20:10:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.040 20:10:31 -- setup/common.sh@33 -- # echo 0
00:04:01.040 20:10:31 -- setup/common.sh@33 -- # return 0
00:04:01.040 nr_hugepages=1024
resv_hugepages=0
00:04:01.040 20:10:31 -- setup/hugepages.sh@100 -- # resv=0
00:04:01.040 20:10:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:01.040 20:10:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:01.040 surplus_hugepages=0
anon_hugepages=0
00:04:01.040 20:10:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:01.041 20:10:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:01.041 20:10:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:01.041 20:10:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
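[Editor's note] The hugepages.sh@107-@110 arithmetic above is the core acceptance check: the kernel's HugePages_Total must equal the requested count plus any surplus and reserved pages. A sketch of that bookkeeping, reusing the hypothetical get_meminfo_sketch from the earlier note:

    nr_hugepages=1024                             # count requested by the test
    surp=$(get_meminfo_sketch HugePages_Surp)     # pages allocated beyond the request
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # pages reserved but not yet faulted in
    total=$(get_meminfo_sketch HugePages_Total)
    # in this run: 1024 == 1024 + 0 + 0, so the check passes
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

The HugePages_Total lookup that the trace performs next is the last operand of this comparison.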
00:04:01.041 20:10:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.041 20:10:31 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:01.041 20:10:31 -- setup/common.sh@18 -- # local node=
00:04:01.041 20:10:31 -- setup/common.sh@19 -- # local var val
00:04:01.041 20:10:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.041 20:10:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.041 20:10:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.041 20:10:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.041 20:10:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.041 20:10:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.041 20:10:31 -- setup/common.sh@31 -- # IFS=': '
00:04:01.041 20:10:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7482184 kB' 'MemAvailable: 9478844 kB' 'Buffers: 2436 kB' 'Cached: 2208536 kB' 'SwapCached: 0 kB' 'Active: 859984 kB' 'Inactive: 1474080 kB' 'Active(anon): 133576 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1028 kB' 'Writeback: 0 kB' 'AnonPages: 124724 kB' 'Mapped: 48880 kB' 'Shmem: 10472 kB' 'KReclaimable: 66220 kB' 'Slab: 143456 kB' 'SReclaimable: 66220 kB' 'SUnreclaim: 77236 kB' 'KernelStack: 6464 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55140 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:04:01.041 20:10:31 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: per-field scan against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l walks MemTotal through Unaccepted before the key matches]
00:04:01.042 20:10:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.042 20:10:31 -- setup/common.sh@33 -- # echo 1024
00:04:01.042 20:10:31 -- setup/common.sh@33 -- # return 0
00:04:01.042 20:10:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:01.042 20:10:31 -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.042 20:10:31 -- setup/hugepages.sh@27 -- # local node
00:04:01.042 20:10:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.042 20:10:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:01.042 20:10:31 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:01.042 20:10:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:01.042 20:10:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.042 20:10:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.042 20:10:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:01.042 20:10:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.042 20:10:31 -- setup/common.sh@18 -- # local node=0
00:04:01.042 20:10:31 -- setup/common.sh@19 -- # local var val
00:04:01.042 20:10:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.042 20:10:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.042 20:10:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.042 20:10:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.042 20:10:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.042 20:10:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.042 20:10:31 -- setup/common.sh@31 -- # IFS=': '
00:04:01.042 20:10:31 -- setup/common.sh@31 -- # read -r var val _
00:04:01.042 20:10:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7482780 kB' 'MemUsed: 4759192 kB' 'SwapCached: 0 kB' 'Active: 859928 kB' 'Inactive: 1474080 kB' 'Active(anon): 133520 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1028 kB' 'Writeback: 0 kB' 'FilePages: 2210972 kB' 'Mapped: 48880 kB' 'AnonPages: 124620 kB' 'Shmem: 10472 kB' 'KernelStack: 6448 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66220 kB' 'Slab: 143456 kB' 'SReclaimable: 66220 kB' 'SUnreclaim: 77236 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: per-field scan of the node0 meminfo against \H\u\g\e\P\a\g\e\s\_\S\u\r\p walks MemTotal through HugePages_Free before the key matches]
00:04:01.043 20:10:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.043 20:10:31 -- setup/common.sh@33 -- # echo 0
00:04:01.043 20:10:31 -- setup/common.sh@33 -- # return 0
node0=1024 expecting 1024
************************************
00:04:01.043 END TEST default_setup
00:04:01.043 ************************************
00:04:01.043 20:10:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:01.043 20:10:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.043 20:10:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.043 20:10:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.043 20:10:31 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:01.043 20:10:31 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:01.043 real	0m1.798s
00:04:01.043 user	0m0.658s
00:04:01.043 sys	0m1.130s
00:04:01.043 20:10:31 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:01.043 20:10:31 -- common/autotest_common.sh@10 -- # set +x
00:04:01.338 20:10:31 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:01.338 20:10:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:01.338 20:10:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:01.338 20:10:31 -- common/autotest_common.sh@10 -- # set +x
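[Editor's note] The get_nodes walk that closed out default_setup repeats the same accounting per NUMA node, reading each node's copy of the counters from /sys. A sketch of that per-node tally (illustrative names; extglob assumed on, as in the first sketch):

    declare -a nodes_sys
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node_id=${node_dir##*node}    # e.g. /sys/devices/system/node/node0 -> 0
        nodes_sys[node_id]=$(get_meminfo_sketch HugePages_Total "$node_id")
    done
    # this VM has a single node, hence the line the log prints:
    echo "node0=${nodes_sys[0]} expecting 1024"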
00:04:01.338 ************************************
00:04:01.338 START TEST per_node_1G_alloc
00:04:01.338 ************************************
00:04:01.338 20:10:31 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:04:01.338 20:10:31 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:01.338 20:10:31 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:01.338 20:10:31 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:01.338 20:10:31 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:01.338 20:10:31 -- setup/hugepages.sh@51 -- # shift
00:04:01.339 20:10:31 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:01.339 20:10:31 -- setup/hugepages.sh@52 -- # local node_ids
00:04:01.339 20:10:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:01.339 20:10:31 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:01.339 20:10:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:01.339 20:10:31 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:01.339 20:10:31 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.339 20:10:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:01.339 20:10:31 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:01.339 20:10:31 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.339 20:10:31 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.339 20:10:31 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:01.339 20:10:31 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:01.339 20:10:31 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:01.339 20:10:31 -- setup/hugepages.sh@73 -- # return 0
00:04:01.339 20:10:31 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:01.339 20:10:31 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:01.339 20:10:31 -- setup/hugepages.sh@146 -- # setup output
00:04:01.339 20:10:31 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.339 20:10:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:01.938 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:01.938 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:01.938 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:01.938 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:01.938 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
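[Editor's note] get_test_nr_hugepages above turns the requested size (1048576 kB, i.e. 1 GiB) into a page count for node 0. The trace only shows the operands and the result (nr_hugepages=512), so the division in this sketch is an inference, consistent with the 2048 kB Hugepagesize reported in the meminfo dumps:

    size=1048576              # requested size in kB (1 GiB), per the trace
    default_hugepages=2048    # Hugepagesize from the meminfo dumps, in kB
    # 1048576 / 2048 = 512, matching nr_hugepages=512 and NRHUGE=512 above
    (( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
    echo "NRHUGE=$nr_hugepages HUGENODE=0"    # node 0 receives all 512 2 MiB pages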
00:04:01.938 20:10:32 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:01.938 20:10:32 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:01.938 20:10:32 -- setup/hugepages.sh@89 -- # local node
00:04:01.938 20:10:32 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:01.938 20:10:32 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:01.938 20:10:32 -- setup/hugepages.sh@92 -- # local surp
00:04:01.938 20:10:32 -- setup/hugepages.sh@93 -- # local resv
00:04:01.938 20:10:32 -- setup/hugepages.sh@94 -- # local anon
00:04:01.938 20:10:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:01.938 20:10:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:01.938 20:10:32 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:01.938 20:10:32 -- setup/common.sh@18 -- # local node=
00:04:01.938 20:10:32 -- setup/common.sh@19 -- # local var val
00:04:01.938 20:10:32 -- setup/common.sh@20 -- # local mem_f mem
00:04:01.938 20:10:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.938 20:10:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.938 20:10:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.938 20:10:32 -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.938 20:10:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.938 20:10:32 -- setup/common.sh@31 -- # IFS=': '
00:04:01.938 20:10:32 -- setup/common.sh@31 -- # read -r var val _
00:04:01.938 20:10:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8527660 kB' 'MemAvailable: 10524332 kB' 'Buffers: 2436 kB' 'Cached: 2208536 kB' 'SwapCached: 0 kB' 'Active: 859984 kB' 'Inactive: 1474092 kB' 'Active(anon): 133576 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474092 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 124936 kB' 'Mapped: 49020 kB' 'Shmem: 10472 kB' 'KReclaimable: 66220 kB' 'Slab: 143440 kB' 'SReclaimable: 66220 kB' 'SUnreclaim: 77220 kB' 'KernelStack: 6480 kB' 'PageTables: 4552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: per-field scan against \A\n\o\n\H\u\g\e\P\a\g\e\s walks MemTotal through HardwareCorrupted before the key matches]
00:04:02.202 20:10:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:02.202 20:10:32 -- setup/common.sh@33 -- # echo 0
00:04:02.202 20:10:32 -- setup/common.sh@33 -- # return 0
00:04:02.202 20:10:32 -- setup/hugepages.sh@97 -- # anon=0
00:04:02.202 20:10:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:02.202 20:10:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:02.202 20:10:32 -- setup/common.sh@18 -- # local node=
00:04:02.202 20:10:32 -- setup/common.sh@19 -- # local var val
00:04:02.202 20:10:32 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.202 20:10:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.202 20:10:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.202 20:10:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.202 20:10:32 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.202 20:10:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.202 20:10:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8527660 kB' 'MemAvailable: 10524332 kB' 'Buffers: 2436 kB' 'Cached: 2208536 kB' 'SwapCached: 0 kB' 'Active: 859968 kB' 'Inactive: 1474092 kB' 'Active(anon): 133560 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474092 kB' 'Unevictable: 1536 kB' 'Mlocked: 
0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 124692 kB' 'Mapped: 49020 kB' 'Shmem: 10472 kB' 'KReclaimable: 66220 kB' 'Slab: 143440 kB' 'SReclaimable: 66220 kB' 'SUnreclaim: 77220 kB' 'KernelStack: 6480 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.202 20:10:32 -- setup/common.sh@31 
-- # read -r var val _ 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.202 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.202 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 
20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 
20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.203 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.203 20:10:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.204 20:10:32 -- setup/common.sh@33 -- # echo 0 00:04:02.204 20:10:32 -- setup/common.sh@33 -- # return 0 00:04:02.204 20:10:32 -- setup/hugepages.sh@99 -- # surp=0 00:04:02.204 20:10:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.204 20:10:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.204 20:10:32 -- setup/common.sh@18 -- # local node= 00:04:02.204 20:10:32 -- setup/common.sh@19 -- # local var val 00:04:02.204 20:10:32 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.204 20:10:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.204 20:10:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.204 20:10:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.204 20:10:32 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.204 20:10:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8527408 kB' 'MemAvailable: 10524080 kB' 'Buffers: 2436 kB' 'Cached: 2208536 kB' 'SwapCached: 0 kB' 'Active: 859848 kB' 'Inactive: 1474092 kB' 'Active(anon): 133440 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474092 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 124540 kB' 'Mapped: 48896 kB' 'Shmem: 10472 kB' 'KReclaimable: 66220 kB' 'Slab: 143440 kB' 'SReclaimable: 66220 kB' 'SUnreclaim: 77220 kB' 'KernelStack: 6448 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 
'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.204 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.204 20:10:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.205 20:10:32 -- setup/common.sh@33 -- # echo 0 00:04:02.205 20:10:32 -- setup/common.sh@33 -- # return 0 00:04:02.205 20:10:32 -- setup/hugepages.sh@100 -- # resv=0 00:04:02.205 nr_hugepages=512 00:04:02.205 resv_hugepages=0 00:04:02.205 surplus_hugepages=0 00:04:02.205 anon_hugepages=0 00:04:02.205 20:10:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:02.205 20:10:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.205 20:10:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.205 20:10:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.205 20:10:32 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:02.205 20:10:32 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:02.205 20:10:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.205 20:10:32 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.205 20:10:32 -- setup/common.sh@18 -- # local node= 00:04:02.205 20:10:32 -- setup/common.sh@19 -- # local var val 00:04:02.205 20:10:32 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.205 20:10:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.205 20:10:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.205 20:10:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.205 20:10:32 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.205 20:10:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8527408 kB' 'MemAvailable: 10524080 kB' 'Buffers: 2436 kB' 'Cached: 2208536 kB' 'SwapCached: 0 kB' 'Active: 859856 kB' 'Inactive: 1474092 kB' 'Active(anon): 133448 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474092 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 124808 kB' 'Mapped: 48896 kB' 'Shmem: 10472 kB' 'KReclaimable: 66220 kB' 'Slab: 143440 kB' 'SReclaimable: 66220 kB' 'SUnreclaim: 77220 kB' 'KernelStack: 6448 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 
20:10:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.205 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.205 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 
20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.206 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.206 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.207 20:10:32 -- setup/common.sh@33 -- # echo 512 00:04:02.207 20:10:32 -- setup/common.sh@33 -- # return 0 00:04:02.207 20:10:32 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:02.207 20:10:32 -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.207 20:10:32 -- setup/hugepages.sh@27 -- # local node 00:04:02.207 20:10:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.207 20:10:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:02.207 20:10:32 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:02.207 20:10:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.207 20:10:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.207 20:10:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.207 20:10:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.207 20:10:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.207 20:10:32 -- setup/common.sh@18 -- # local node=0 00:04:02.207 20:10:32 -- setup/common.sh@19 -- # local 
var val 00:04:02.207 20:10:32 -- setup/common.sh@20 -- # local mem_f mem 00:04:02.207 20:10:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.207 20:10:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.207 20:10:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.207 20:10:32 -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.207 20:10:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.207 20:10:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8527408 kB' 'MemUsed: 3714564 kB' 'SwapCached: 0 kB' 'Active: 859964 kB' 'Inactive: 1474092 kB' 'Active(anon): 133556 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474092 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2210972 kB' 'Mapped: 48896 kB' 'AnonPages: 124652 kB' 'Shmem: 10472 kB' 'KernelStack: 6448 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66220 kB' 'Slab: 143440 kB' 'SReclaimable: 66220 kB' 'SUnreclaim: 77220 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # continue 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:02.207 20:10:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:02.207 20:10:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.207 20:10:32 -- 
[xtrace elided: setup/common.sh@31-32 read and skip every node0 meminfo key from MemTotal through HugePages_Free]
00:04:02.208 20:10:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:02.208 20:10:32 -- setup/common.sh@33 -- # echo 0
00:04:02.208 20:10:32 -- setup/common.sh@33 -- # return 0
00:04:02.208 20:10:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:02.208 20:10:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:02.208 20:10:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:02.208 20:10:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:02.208 node0=512 expecting 512
00:04:02.208 ************************************
00:04:02.208 END TEST per_node_1G_alloc
00:04:02.208 ************************************
00:04:02.208 20:10:32 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:02.208 20:10:32 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:02.208 real 0m0.961s
00:04:02.208 user 0m0.417s
00:04:02.208 sys 0m0.588s
00:04:02.208 20:10:32 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:04:02.208 20:10:32 -- common/autotest_common.sh@10 -- # set +x
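run_test is the harness wrapper that produced the banner pair and the real/user/sys triple just above, and it is about to do the same for even_2G_alloc below. A hedged reconstruction of its observable behavior (an assumption, not the actual common/autotest_common.sh source):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # emits the real/user/sys triple seen in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }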
00:04:02.208 20:10:32 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:02.208 20:10:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:02.208 20:10:32 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:02.208 20:10:32 -- common/autotest_common.sh@10 -- # set +x
00:04:02.467 ************************************
00:04:02.467 START TEST even_2G_alloc
00:04:02.467 ************************************
00:04:02.467 20:10:32 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:04:02.467 20:10:32 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:02.467 20:10:32 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:02.467 20:10:32 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:02.467 20:10:32 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:02.467 20:10:32 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:02.467 20:10:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:02.467 20:10:32 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:02.467 20:10:32 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:02.467 20:10:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:02.467 20:10:32 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:02.467 20:10:32 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:02.467 20:10:32 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:02.467 20:10:32 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:02.468 20:10:32 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:02.468 20:10:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:02.468 20:10:32 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:02.468 20:10:32 -- setup/hugepages.sh@83 -- # : 0
00:04:02.468 20:10:32 -- setup/hugepages.sh@84 -- # : 0
00:04:02.468 20:10:32 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
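The @49-@84 lines are the sizing path: even_2G_alloc requests 2 GiB expressed in kB, get_test_nr_hugepages divides by the default hugepage size, and with no user node list on a single-node box the per-node helper parks the whole count on the only node. A hedged sketch of that arithmetic (the awk read of Hugepagesize is an assumption standing in for the script's default_hugepages):

    nr_hugepages=0
    nodes_test=()

    get_test_nr_hugepages() {
        local size=$1                       # 2097152 kB == 2 GiB
        local default_kb
        default_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 kB here
        (( size >= default_kb )) || return 1
        nr_hugepages=$(( size / default_kb ))   # 2097152 / 2048 = 1024
        # One NUMA node, no user list: everything lands on node 0, matching
        # nodes_test[_no_nodes - 1]=1024 in the trace above.
        nodes_test[0]=$nr_hugepages
    }

    get_test_nr_hugepages 2097152   # leaves nr_hugepages=1024, nodes_test[0]=1024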
00:04:02.468 20:10:32 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:02.468 20:10:32 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:02.468 20:10:32 -- setup/hugepages.sh@153 -- # setup output
00:04:02.468 20:10:32 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:02.468 20:10:32 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:02.726 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:02.986 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.986 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.986 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.986 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:02.986 20:10:33 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:02.986 20:10:33 -- setup/hugepages.sh@89 -- # local node
00:04:02.986 20:10:33 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:02.986 20:10:33 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:02.986 20:10:33 -- setup/hugepages.sh@92 -- # local surp
00:04:02.986 20:10:33 -- setup/hugepages.sh@93 -- # local resv
00:04:02.986 20:10:33 -- setup/hugepages.sh@94 -- # local anon
00:04:02.986 20:10:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:02.986 20:10:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:02.986 20:10:33 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:02.986 20:10:33 -- setup/common.sh@18 -- # local node=
00:04:02.986 20:10:33 -- setup/common.sh@19 -- # local var val
00:04:02.986 20:10:33 -- setup/common.sh@20 -- # local mem_f mem
00:04:02.986 20:10:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:02.986 20:10:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:02.986 20:10:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:02.986 20:10:33 -- setup/common.sh@28 -- # mapfile -t mem
00:04:02.986 20:10:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:02.986 20:10:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7485808 kB' 'MemAvailable: 9482512 kB' 'Buffers: 2436 kB' 'Cached: 2208572 kB' 'SwapCached: 0 kB' 'Active: 860100 kB' 'Inactive: 1474128 kB' 'Active(anon): 133692 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474128 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 124796 kB' 'Mapped: 49040 kB' 'Shmem: 10472 kB' 'KReclaimable: 66216 kB' 'Slab: 143440 kB' 'SReclaimable: 66216 kB' 'SUnreclaim: 77224 kB' 'KernelStack: 6420 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55220 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: setup/common.sh@31-32 read and skip every key from MemTotal through HardwareCorrupted]
00:04:03.249 20:10:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.249 20:10:33 -- setup/common.sh@33 -- # echo 0
00:04:03.249 20:10:33 -- setup/common.sh@33 -- # return 0
00:04:03.249 20:10:33 -- setup/hugepages.sh@97 -- # anon=0
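The @96 guard reads /sys/kernel/mm/transparent_hugepage/enabled, which on this VM reports "always [madvise] never": anonymous hugepage usage is only sampled while THP is not pinned to [never]. The same guard in isolation (reusing the get_meminfo sketch from earlier; the variable names are illustrative):

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in the dump above
    else
        anon=0
    fi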
00:04:03.249 20:10:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.249 20:10:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.249 20:10:33 -- setup/common.sh@18 -- # local node=
00:04:03.249 20:10:33 -- setup/common.sh@19 -- # local var val
00:04:03.249 20:10:33 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.249 20:10:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.249 20:10:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.249 20:10:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.249 20:10:33 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.250 20:10:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.250 20:10:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7485556 kB' 'MemAvailable: 9482260 kB' 'Buffers: 2436 kB' 'Cached: 2208572 kB' 'SwapCached: 0 kB' 'Active: 859980 kB' 'Inactive: 1474128 kB' 'Active(anon): 133572 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474128 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 124676 kB' 'Mapped: 48904 kB' 'Shmem: 10472 kB' 'KReclaimable: 66216 kB' 'Slab: 143500 kB' 'SReclaimable: 66216 kB' 'SUnreclaim: 77284 kB' 'KernelStack: 6448 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: setup/common.sh@31-32 read and skip every key from MemTotal through HugePages_Rsvd]
00:04:03.251 20:10:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.251 20:10:33 -- setup/common.sh@33 -- # echo 0
00:04:03.251 20:10:33 -- setup/common.sh@33 -- # return 0
00:04:03.251 20:10:33 -- setup/hugepages.sh@99 -- # surp=0
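Every get_meminfo call in this pass re-parses the same /proc/meminfo snapshot. On a host in the state the dump above shows, the four hugepage counters can be checked by hand and would read:

    $ grep -E '^HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
    HugePages_Total:    1024
    HugePages_Free:     1024
    HugePages_Rsvd:        0
    HugePages_Surp:        0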
00:04:03.251 20:10:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.251 20:10:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:03.251 20:10:33 -- setup/common.sh@18 -- # local node=
00:04:03.251 20:10:33 -- setup/common.sh@19 -- # local var val
00:04:03.251 20:10:33 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.251 20:10:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.251 20:10:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.251 20:10:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.251 20:10:33 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.251 20:10:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.251 20:10:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7485304 kB' 'MemAvailable: 9482008 kB' 'Buffers: 2436 kB' 'Cached: 2208572 kB' 'SwapCached: 0 kB' 'Active: 859996 kB' 'Inactive: 1474128 kB' 'Active(anon): 133588 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474128 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 124684 kB' 'Mapped: 48904 kB' 'Shmem: 10472 kB' 'KReclaimable: 66216 kB' 'Slab: 143500 kB' 'SReclaimable: 66216 kB' 'SUnreclaim: 77284 kB' 'KernelStack: 6448 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: setup/common.sh@31-32 read and skip every key from MemTotal through HugePages_Free]
00:04:03.252 20:10:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.252 20:10:33 -- setup/common.sh@33 -- # echo 0
00:04:03.252 20:10:33 -- setup/common.sh@33 -- # return 0
00:04:03.252 20:10:33 -- setup/hugepages.sh@100 -- # resv=0
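With anon, surp and resv all collected, what remains of verify_nr_hugepages (the @102-@110 lines below) is summary output plus two arithmetic checks against the kernel's own counter. A hedged sketch of that tail, reusing the get_meminfo sketch from earlier:

    verify_nr_hugepages() {
        local want=$1 surp resv total
        surp=$(get_meminfo HugePages_Surp)    # 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
        total=$(get_meminfo HugePages_Total)  # 1024 in this run
        echo "nr_hugepages=$want"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        # HugePages_Total must cover the request plus surplus/reserved pages,
        # and here must equal the request exactly (the @107 and @109 checks).
        (( total == want + surp + resv )) && (( total == want ))
    }

So verify_nr_hugepages 1024 succeeds exactly when the dump that follows reports HugePages_Total: 1024.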
00:04:03.252 nr_hugepages=1024
00:04:03.252 resv_hugepages=0
00:04:03.252 surplus_hugepages=0
00:04:03.252 anon_hugepages=0
00:04:03.252 20:10:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:03.252 20:10:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:03.252 20:10:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:03.252 20:10:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:03.252 20:10:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:03.252 20:10:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:03.252 20:10:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:03.252 20:10:33 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:03.252 20:10:33 -- setup/common.sh@18 -- # local node=
00:04:03.252 20:10:33 -- setup/common.sh@19 -- # local var val
00:04:03.252 20:10:33 -- setup/common.sh@20 -- # local mem_f mem
00:04:03.252 20:10:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.252 20:10:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.252 20:10:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.252 20:10:33 -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.253 20:10:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.253 20:10:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7485304 kB' 'MemAvailable: 9482008 kB' 'Buffers: 2436 kB' 'Cached: 2208572 kB' 'SwapCached: 0 kB' 'Active: 859984 kB' 'Inactive: 1474128 kB' 'Active(anon): 133576 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474128 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'AnonPages: 124676 kB' 'Mapped: 48904 kB' 'Shmem: 10472 kB' 'KReclaimable: 66216 kB' 'Slab: 143500 kB' 'SReclaimable: 66216 kB' 'SUnreclaim: 77284 kB' 'KernelStack: 6448 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue
00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': '
00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue
00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': '
00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue
00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': '
00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue
00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': '
00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue
00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': '
00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue
00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': '
00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _
00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:03.253 20:10:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.253 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.253 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.253 
20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.254 20:10:33 -- setup/common.sh@33 -- # echo 1024 00:04:03.254 20:10:33 -- setup/common.sh@33 -- # return 0 00:04:03.254 20:10:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:03.254 20:10:33 -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.254 20:10:33 -- setup/hugepages.sh@27 -- # local node 00:04:03.254 20:10:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.254 20:10:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:03.254 20:10:33 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:03.254 20:10:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.254 20:10:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.254 20:10:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.254 20:10:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.254 20:10:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.254 20:10:33 -- setup/common.sh@18 -- # local node=0 00:04:03.254 20:10:33 -- setup/common.sh@19 -- # local var val 00:04:03.254 20:10:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:03.254 20:10:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.254 20:10:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.254 20:10:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.254 20:10:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.254 20:10:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7485304 kB' 'MemUsed: 4756668 kB' 'SwapCached: 0 kB' 'Active: 859992 kB' 'Inactive: 1474128 kB' 'Active(anon): 133584 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474128 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 648 kB' 'Writeback: 0 kB' 'FilePages: 2211008 kB' 'Mapped: 48904 kB' 'AnonPages: 124676 kB' 'Shmem: 10472 kB' 'KernelStack: 6448 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66216 kB' 'Slab: 143500 kB' 'SReclaimable: 66216 kB' 'SUnreclaim: 77284 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 
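The traces above all follow one pattern: setup/common.sh's get_meminfo walks a meminfo file one "Field: value" record at a time, comparing each field name against the requested key and echoing the matching value. With node unset, the [[ -e /sys/devices/system/node/node/meminfo ]] probe fails and it falls back to /proc/meminfo; with node=0 it reads /sys/devices/system/node/node0/meminfo and strips the "Node 0 " prefix via the mem=("${mem[@]#Node +([0-9]) }") expansion visible in the trace. A minimal standalone sketch of that lookup, in the same spirit but not common.sh verbatim:

    get_meminfo() {
        # Usage: get_meminfo <Field> [node] -- mirrors the lookup traced
        # above, minus the per-field [[ ... ]]/continue instrumentation.
        local get=$1 node=${2-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        shopt -s extglob
        # Prefer the per-node view when a node index is supplied and present.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each record with "Node <n> "; strip it, as
        # the trace's mem=("${mem[@]#Node +([0-9]) }") does.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

With that sketch, get_meminfo HugePages_Surp 0 reproduces the node0 lookup just traced and prints 0.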
00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.254 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.254 20:10:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # continue 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:03.255 20:10:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:03.255 20:10:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.255 20:10:33 -- setup/common.sh@33 -- # echo 0 00:04:03.255 20:10:33 -- setup/common.sh@33 -- # return 0 00:04:03.255 node0=1024 expecting 1024 00:04:03.255 ************************************ 00:04:03.255 END TEST even_2G_alloc 
00:04:03.255 ************************************ 00:04:03.255 20:10:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.255 20:10:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:03.255 20:10:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:03.255 20:10:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:03.255 20:10:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:03.255 20:10:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:03.255 00:04:03.255 real 0m0.954s 00:04:03.255 user 0m0.406s 00:04:03.255 sys 0m0.569s 00:04:03.255 20:10:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:03.255 20:10:33 -- common/autotest_common.sh@10 -- # set +x 00:04:03.255 20:10:33 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:03.255 20:10:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.255 20:10:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.255 20:10:33 -- common/autotest_common.sh@10 -- # set +x 00:04:03.514 ************************************ 00:04:03.514 START TEST odd_alloc 00:04:03.514 ************************************ 00:04:03.514 20:10:33 -- common/autotest_common.sh@1111 -- # odd_alloc 00:04:03.514 20:10:33 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:03.514 20:10:33 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:03.514 20:10:33 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.514 20:10:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.514 20:10:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:03.514 20:10:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.514 20:10:33 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.514 20:10:33 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.514 20:10:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:03.514 20:10:33 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:03.514 20:10:33 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.514 20:10:33 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.514 20:10:33 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.514 20:10:33 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:03.514 20:10:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.514 20:10:33 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:03.514 20:10:33 -- setup/hugepages.sh@83 -- # : 0 00:04:03.514 20:10:33 -- setup/hugepages.sh@84 -- # : 0 00:04:03.514 20:10:33 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.514 20:10:33 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:03.514 20:10:33 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:03.514 20:10:33 -- setup/hugepages.sh@160 -- # setup output 00:04:03.514 20:10:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.514 20:10:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.083 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.083 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.083 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.083 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.083 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:04.083 20:10:34 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:04.083 20:10:34 -- 
setup/hugepages.sh@89 -- # local node 00:04:04.083 20:10:34 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.083 20:10:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.083 20:10:34 -- setup/hugepages.sh@92 -- # local surp 00:04:04.083 20:10:34 -- setup/hugepages.sh@93 -- # local resv 00:04:04.083 20:10:34 -- setup/hugepages.sh@94 -- # local anon 00:04:04.083 20:10:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.083 20:10:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.083 20:10:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.083 20:10:34 -- setup/common.sh@18 -- # local node= 00:04:04.083 20:10:34 -- setup/common.sh@19 -- # local var val 00:04:04.083 20:10:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.083 20:10:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.083 20:10:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.083 20:10:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.083 20:10:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.083 20:10:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.083 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 20:10:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7476932 kB' 'MemAvailable: 9473648 kB' 'Buffers: 2436 kB' 'Cached: 2208584 kB' 'SwapCached: 0 kB' 'Active: 860340 kB' 'Inactive: 1474140 kB' 'Active(anon): 133932 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'AnonPages: 125024 kB' 'Mapped: 49016 kB' 'Shmem: 10472 kB' 'KReclaimable: 66216 kB' 'Slab: 143520 kB' 'SReclaimable: 66216 kB' 'SUnreclaim: 77304 kB' 'KernelStack: 6520 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55204 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 
20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.084 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.084 20:10:34 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- 
setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.346 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.346 20:10:34 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 
-- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.347 20:10:34 -- setup/common.sh@33 -- # echo 0 00:04:04.347 20:10:34 -- setup/common.sh@33 -- # return 0 00:04:04.347 20:10:34 -- setup/hugepages.sh@97 -- # anon=0 00:04:04.347 20:10:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:04.347 20:10:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.347 20:10:34 
-- setup/common.sh@18 -- # local node= 00:04:04.347 20:10:34 -- setup/common.sh@19 -- # local var val 00:04:04.347 20:10:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.347 20:10:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.347 20:10:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.347 20:10:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.347 20:10:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.347 20:10:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.347 20:10:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7476932 kB' 'MemAvailable: 9473648 kB' 'Buffers: 2436 kB' 'Cached: 2208584 kB' 'SwapCached: 0 kB' 'Active: 860268 kB' 'Inactive: 1474140 kB' 'Active(anon): 133860 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'AnonPages: 124996 kB' 'Mapped: 48916 kB' 'Shmem: 10472 kB' 'KReclaimable: 66216 kB' 'Slab: 143520 kB' 'SReclaimable: 66216 kB' 'SUnreclaim: 77304 kB' 'KernelStack: 6528 kB' 'PageTables: 4704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.347 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.347 20:10:34 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.348 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.348 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.348 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.348 20:10:34 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.348 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.348 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.348 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.348 20:10:34 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.348 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.348 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.348 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.348 20:10:34 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.348 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.348 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.348 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.348 20:10:34 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.348 20:10:34 -- setup/common.sh@32 -- # continue 00:04:04.348 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.348 20:10:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:04.348 20:10:34 -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.348 20:10:34 -- setup/common.sh@32 -- # continue
[xtrace condensed: the IFS=': ' read loop skips every remaining /proc/meminfo key from Writeback through HugePages_Rsvd that does not match HugePages_Surp, one continue per key]
00:04:04.349 20:10:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.349 20:10:34 -- setup/common.sh@33 -- # echo 0 00:04:04.349 20:10:34 -- setup/common.sh@33 -- # return 0
00:04:04.349 20:10:34 -- setup/hugepages.sh@99 -- # surp=0
00:04:04.349 20:10:34 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:04.349 20:10:34 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:04.349 20:10:34 -- setup/common.sh@18 -- # local node= 00:04:04.349 20:10:34 -- setup/common.sh@19 -- # local var val 00:04:04.349 20:10:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.349 20:10:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.349 20:10:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.349 20:10:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.349 20:10:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.349 20:10:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.349 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.349 20:10:34 -- setup/common.sh@31 -- # read -r var val _
00:04:04.349 20:10:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7476932 kB' 'MemAvailable: 9473648 kB' 'Buffers: 2436 kB' 'Cached: 2208584 kB' 'SwapCached: 0 kB' 'Active: 860388 kB' 'Inactive: 1474140 kB' 'Active(anon): 133980 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'AnonPages: 125076 kB' 'Mapped: 48916 kB' 'Shmem: 10472 kB' 'KReclaimable: 66216 kB' 'Slab: 143520 kB' 'SReclaimable: 66216 kB' 'SUnreclaim: 77304 kB' 'KernelStack: 6512 kB' 'PageTables: 4652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: the read loop skips every /proc/meminfo key from MemTotal through HugePages_Free that does not match HugePages_Rsvd]
00:04:04.350 20:10:34 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.350 20:10:34 -- setup/common.sh@33 -- # echo 0 00:04:04.350 20:10:34 -- setup/common.sh@33 -- # return 0
00:04:04.350 20:10:34 -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1025
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
20:10:34 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:04.350 20:10:34 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.350 20:10:34 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.350 20:10:34 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:04.350 20:10:34 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:04.350 20:10:34 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:04.350 20:10:34 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:04.350 20:10:34 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:04.350 20:10:34 -- setup/common.sh@18 -- # local node= 00:04:04.350 20:10:34 -- setup/common.sh@19 -- # local var val 00:04:04.350 20:10:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.350 20:10:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.350 20:10:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.350 20:10:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.350 20:10:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.350 20:10:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.350 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.350 20:10:34 -- setup/common.sh@31 -- # read -r var val _
00:04:04.350 20:10:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7476932 kB' 'MemAvailable: 9473648 kB' 'Buffers: 2436 kB' 'Cached: 2208584 kB' 'SwapCached: 0 kB' 'Active: 860260 kB' 'Inactive: 1474140 kB' 'Active(anon): 133852 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'AnonPages: 124996 kB' 'Mapped: 48916 kB' 'Shmem: 10472 kB' 'KReclaimable: 66216 kB' 'Slab: 143520 kB' 'SReclaimable: 66216 kB' 'SUnreclaim: 77304 kB' 'KernelStack: 6528 kB' 'PageTables: 4704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 355816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55172 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB'
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
[xtrace condensed: the read loop skips every /proc/meminfo key from MemTotal through Unaccepted that does not match HugePages_Total]
00:04:04.352 20:10:34 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.352 20:10:34 -- setup/common.sh@33 -- # echo 1025 00:04:04.352 20:10:34 -- setup/common.sh@33 -- # return 0 00:04:04.352
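[editorial sketch] The three lookups condensed above (HugePages_Surp, HugePages_Rsvd, HugePages_Total) all run the same helper. Below is a minimal Bash sketch of the logic the xtrace shows, reconstructed from the trace rather than copied from the verbatim setup/common.sh source; treat names and details as an approximation:

    #!/usr/bin/env bash
    # Sketch of the lookup the trace shows: read the (per-node) meminfo
    # file into an array, split each line on ': ', print the value of
    # the requested key. Reconstructed from the xtrace, not verbatim.
    shopt -s extglob  # needed for the "Node <n> " prefix strip below
    get_meminfo() {
        local get=$1 node=${2:-}   # e.g. get_meminfo HugePages_Surp 0
        local var val _ line
        local mem_f=/proc/meminfo
        local -a mem
        # With a node argument, prefer the per-node sysfs copy; with
        # node empty this probes .../node/meminfo and fails, as traced.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node files prefix every line with "Node 0 "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # value only, e.g. "0" or "1025"
                return 0
            fi
        done
        return 1
    }
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    total=$(get_meminfo HugePages_Total 0)  # 1025 on node 0 in this run

The key-by-key `continue` lines in the raw trace are simply this loop skipping every non-matching /proc/meminfo entry before it reaches the requested key.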
20:10:34 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:04.352 20:10:34 -- setup/hugepages.sh@112 -- # get_nodes 00:04:04.352 20:10:34 -- setup/hugepages.sh@27 -- # local node 00:04:04.352 20:10:34 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:04.352 20:10:34 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:04.352 20:10:34 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:04.352 20:10:34 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.352 20:10:34 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:04.352 20:10:34 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.352 20:10:34 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:04.352 20:10:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:04.352 20:10:34 -- setup/common.sh@18 -- # local node=0 00:04:04.352 20:10:34 -- setup/common.sh@19 -- # local var val 00:04:04.352 20:10:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:04.352 20:10:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.352 20:10:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:04.352 20:10:34 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:04.352 20:10:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.352 20:10:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.352 20:10:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:04.352 20:10:34 -- setup/common.sh@31 -- # read -r var val _
00:04:04.352 20:10:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7476932 kB' 'MemUsed: 4765040 kB' 'SwapCached: 0 kB' 'Active: 860216 kB' 'Inactive: 1474140 kB' 'Active(anon): 133808 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 828 kB' 'Writeback: 0 kB' 'FilePages: 2211020 kB' 'Mapped: 48916 kB' 'AnonPages: 124908 kB' 'Shmem: 10472 kB' 'KernelStack: 6496 kB' 'PageTables: 4600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66216 kB' 'Slab: 143520 kB' 'SReclaimable: 66216 kB' 'SUnreclaim: 77304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[xtrace condensed: the read loop skips every node0 meminfo key from MemTotal through HugePages_Free that does not match HugePages_Surp]
00:04:04.353 20:10:34 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.353 20:10:34 -- setup/common.sh@33 -- # echo 0 00:04:04.353 20:10:34 -- setup/common.sh@33 -- # return 0
00:04:04.353 20:10:34 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.353 20:10:34 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.353 20:10:34 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.353 20:10:34 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
node0=1025 expecting 1025
20:10:34 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
************************************
END TEST odd_alloc
************************************
00:04:04.353 20:10:34 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:04.353
real 0m0.962s
user 0m0.410s
sys 0m0.580s
20:10:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:04.353 20:10:34 -- common/autotest_common.sh@10 -- # set +x
00:04:04.353 20:10:34 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:04.353 20:10:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.353 20:10:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.353 20:10:34 -- common/autotest_common.sh@10 -- # set +x 00:04:04.612
************************************
START TEST custom_alloc
************************************
00:04:04.612 20:10:34 -- common/autotest_common.sh@1111 -- # custom_alloc
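[editorial sketch] The odd_alloc verdict just logged reduces to a few integer checks: the odd page count (1025) must match the kernel's global counters and, on this single-node VM, land entirely on node 0. A simplified sketch follows, using the get_meminfo sketch shown earlier; the wrapper name is illustrative, and it collapses the per-node arrays (nodes_test, nodes_sys) that the real hugepages.sh maintains:

    # Simplified odd_alloc consistency check, assuming get_meminfo above.
    verify_odd_alloc() {
        local nr_hugepages=1025               # requested odd count (this run)
        local surp resv total node0
        surp=$(get_meminfo HugePages_Surp)    # surplus pages, 0 here
        resv=$(get_meminfo HugePages_Rsvd)    # reserved pages, 0 here
        total=$(get_meminfo HugePages_Total)  # 1025 here
        # Global consistency: 1025 == 1025 + 0 + 0
        (( total == nr_hugepages + surp + resv )) || return 1
        # Per-node check: node 0 must hold the whole pool.
        node0=$(get_meminfo HugePages_Total 0)
        echo "node0=$node0 expecting $nr_hugepages"
        [[ $node0 == "$nr_hugepages" ]]       # the "1025 == \1\0\2\5" match above
    }

The odd count is the point of the test: 1025 does not divide evenly into any power-of-two grouping, so a bookkeeping error in the surplus/reserved accounting would show up as a mismatch here.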
00:04:04.612 20:10:34 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:04.612 20:10:34 -- setup/hugepages.sh@169 -- # local node 00:04:04.612 20:10:34 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:04.612 20:10:34 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:04.612 20:10:34 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:04.612 20:10:34 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:04.612 20:10:34 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:04.612 20:10:34 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:04.612 20:10:34 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.612 20:10:34 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:04.612 20:10:34 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:04.612 20:10:34 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.612 20:10:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.612 20:10:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.612 20:10:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.612 20:10:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.612 20:10:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.612 20:10:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.612 20:10:34 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:04.612 20:10:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.612 20:10:34 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:04.613 20:10:34 -- setup/hugepages.sh@83 -- # : 0 00:04:04.613 20:10:34 -- setup/hugepages.sh@84 -- # : 0 00:04:04.613 20:10:34 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:04.613 20:10:34 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:04.613 20:10:34 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:04:04.613 20:10:34 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:04.613 20:10:34 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:04.613 20:10:34 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:04.613 20:10:34 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:04.613 20:10:34 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:04.613 20:10:34 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:04.613 20:10:34 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:04.613 20:10:34 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:04.613 20:10:34 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:04.613 20:10:34 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:04.613 20:10:34 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:04.613 20:10:34 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:04.613 20:10:34 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:04.613 20:10:34 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:04.613 20:10:34 -- setup/hugepages.sh@78 -- # return 0 00:04:04.613 20:10:34 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:04:04.613 20:10:34 -- setup/hugepages.sh@187 -- # setup output 00:04:04.613 20:10:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.613 20:10:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:05.180 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:05.180 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.180 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 
00:04:05.180 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.180 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:05.443 20:10:35 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:04:05.443 20:10:35 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:05.443 20:10:35 -- setup/hugepages.sh@89 -- # local node 00:04:05.443 20:10:35 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:05.443 20:10:35 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.443 20:10:35 -- setup/hugepages.sh@92 -- # local surp 00:04:05.443 20:10:35 -- setup/hugepages.sh@93 -- # local resv 00:04:05.443 20:10:35 -- setup/hugepages.sh@94 -- # local anon 00:04:05.443 20:10:35 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.443 20:10:35 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.443 20:10:35 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.443 20:10:35 -- setup/common.sh@18 -- # local node= 00:04:05.443 20:10:35 -- setup/common.sh@19 -- # local var val 00:04:05.443 20:10:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.443 20:10:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.443 20:10:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.443 20:10:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.443 20:10:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.443 20:10:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.443 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.443 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.443 20:10:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8530472 kB' 'MemAvailable: 10527188 kB' 'Buffers: 2436 kB' 'Cached: 2208584 kB' 'SwapCached: 0 kB' 'Active: 856160 kB' 'Inactive: 1474140 kB' 'Active(anon): 129752 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 996 kB' 'Writeback: 0 kB' 'AnonPages: 120648 kB' 'Mapped: 48296 kB' 'Shmem: 10472 kB' 'KReclaimable: 66212 kB' 'Slab: 143404 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77192 kB' 'KernelStack: 6400 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 341164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:05.443 20:10:35 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.443 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.443 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.443 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.443 20:10:35 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.443 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.443 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.443 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.443 20:10:35 -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.443 20:10:35 -- setup/common.sh@32 -- # continue
[xtrace condensed: the read loop skips every /proc/meminfo key from Buffers through HardwareCorrupted that does not match AnonHugePages]
00:04:05.444 20:10:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.444 20:10:35 --
[captured log breaks off mid-trace here]
setup/common.sh@33 -- # echo 0 00:04:05.444 20:10:35 -- setup/common.sh@33 -- # return 0 00:04:05.444 20:10:35 -- setup/hugepages.sh@97 -- # anon=0 00:04:05.444 20:10:35 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.444 20:10:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.444 20:10:35 -- setup/common.sh@18 -- # local node= 00:04:05.444 20:10:35 -- setup/common.sh@19 -- # local var val 00:04:05.444 20:10:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.444 20:10:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.444 20:10:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.444 20:10:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.444 20:10:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.444 20:10:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.444 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.444 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.444 20:10:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8531084 kB' 'MemAvailable: 10527800 kB' 'Buffers: 2436 kB' 'Cached: 2208584 kB' 'SwapCached: 0 kB' 'Active: 856148 kB' 'Inactive: 1474140 kB' 'Active(anon): 129740 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 996 kB' 'Writeback: 0 kB' 'AnonPages: 120636 kB' 'Mapped: 48296 kB' 'Shmem: 10472 kB' 'KReclaimable: 66212 kB' 'Slab: 143404 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77192 kB' 'KernelStack: 6384 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 341164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:05.444 20:10:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.444 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.444 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.444 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.444 20:10:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.444 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.444 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.444 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.444 20:10:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.444 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.444 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.444 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.444 20:10:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.444 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.444 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.444 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.444 20:10:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.444 20:10:35 -- 
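
[Editor's note: the loop traced above reads each /proc/meminfo line with IFS=': ', skips non-matching keys, and echoes the value of the requested one. A minimal sketch of that pattern, reconstructed from the trace; get_meminfo_sketch is a hypothetical name, and this is a simplification, not the verbatim setup/common.sh source.]

shopt -s extglob   # needed for the +([0-9]) pattern used below
get_meminfo_sketch() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo mem=()
        # Per-node counters live in sysfs; with node unset this probes the bogus
        # path /sys/devices/system/node/node/meminfo and fails, exactly as the
        # trace shows, so mem_f stays /proc/meminfo.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
                mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Node files prefix every line with "Node <N> "; strip it the same way
        # the traced script does.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<<"$line"
                [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        echo 0
}
# e.g. get_meminfo_sketch AnonHugePages; get_meminfo_sketch HugePages_Surp 0 reads node0's file.
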
setup/common.sh@32 -- # continue [... identical continue/read iterations for the remaining non-matching keys (SwapCached through HugePages_Rsvd) omitted ...] 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.445 20:10:35 -- setup/common.sh@33 -- # echo 0 00:04:05.445 20:10:35 -- setup/common.sh@33 -- # return 0 00:04:05.445 20:10:35 -- setup/hugepages.sh@99 -- # surp=0 00:04:05.445 20:10:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.445 20:10:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.445 20:10:35 -- setup/common.sh@18 -- # local node= 00:04:05.445 20:10:35 -- setup/common.sh@19 -- # local var val 00:04:05.445 20:10:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.445 20:10:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.445 20:10:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.445 20:10:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.445 20:10:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.445 20:10:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.445 20:10:35 --
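
[Editor's note: the backslash runs such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are bash's xtrace rendering of a quoted right-hand side in [[ ... == ... ]]: each character is escaped to show it matches literally rather than as a glob. A quick reproduction, assuming bash's default xtrace formatting (the exact rendering may vary by version):]

set -x
get=HugePages_Surp
[[ MemTotal == "$get" ]]   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
set +x
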
setup/common.sh@31 -- # read -r var val _ 00:04:05.445 20:10:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8531752 kB' 'MemAvailable: 10528468 kB' 'Buffers: 2436 kB' 'Cached: 2208584 kB' 'SwapCached: 0 kB' 'Active: 855752 kB' 'Inactive: 1474140 kB' 'Active(anon): 129344 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 996 kB' 'Writeback: 0 kB' 'AnonPages: 120720 kB' 'Mapped: 48188 kB' 'Shmem: 10472 kB' 'KReclaimable: 66212 kB' 'Slab: 143400 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77188 kB' 'KernelStack: 6368 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 341164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.445 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.445 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.445 20:10:35 -- 
setup/common.sh@31 -- # read -r var val _ [... identical continue/read iterations for the remaining non-matching keys (Active(anon) through CmaTotal) omitted ...] 00:04:05.446 20:10:35 -- setup/common.sh@31 -- #
IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.446 20:10:35 -- setup/common.sh@33 -- # echo 0 00:04:05.446 20:10:35 -- setup/common.sh@33 -- # return 0 00:04:05.446 20:10:35 -- setup/hugepages.sh@100 -- # resv=0 00:04:05.446 20:10:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:05.446 nr_hugepages=512 00:04:05.446 20:10:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.446 resv_hugepages=0 00:04:05.446 20:10:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.446 surplus_hugepages=0 00:04:05.446 anon_hugepages=0 00:04:05.446 20:10:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.446 20:10:35 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:05.446 20:10:35 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:05.446 20:10:35 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.446 20:10:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.446 20:10:35 -- setup/common.sh@18 -- # local node= 00:04:05.446 20:10:35 -- setup/common.sh@19 -- # local var val 00:04:05.446 20:10:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.446 20:10:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.446 20:10:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.446 20:10:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.446 20:10:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.446 20:10:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8531752 kB' 'MemAvailable: 10528468 kB' 'Buffers: 2436 kB' 'Cached: 2208584 kB' 'SwapCached: 0 kB' 'Active: 855720 kB' 'Inactive: 1474140 kB' 'Active(anon): 129312 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 996 kB' 'Writeback: 0 kB' 'AnonPages: 120684 kB' 'Mapped: 48188 kB' 'Shmem: 10472 kB' 'KReclaimable: 66212 kB' 'Slab: 143400 kB' 'SReclaimable: 66212 kB' 
'SUnreclaim: 77188 kB' 'KernelStack: 6352 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 341164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.446 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.446 20:10:35 -- 
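
[Editor's note: the hugepages.sh steps just traced gather the four counters and assert the accounting invariant (( 512 == nr_hugepages + surp + resv )) before continuing. A condensed equivalent, assuming the hypothetical get_meminfo_sketch helper sketched earlier; 512 is the page count this particular run requested:]

nr_hugepages=$(get_meminfo_sketch HugePages_Total)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
anon=$(get_meminfo_sketch AnonHugePages)
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
# Requested pages must equal allocated + surplus + reserved:
(( 512 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
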
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.446 20:10:35 -- setup/common.sh@32 -- # continue [... identical continue/read iterations for the remaining non-matching keys (Inactive(file) through Unaccepted) omitted ...] 00:04:05.447 20:10:35 -- setup/common.sh@31 -- #
IFS=': ' 00:04:05.447 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.447 20:10:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.447 20:10:35 -- setup/common.sh@33 -- # echo 512 00:04:05.447 20:10:35 -- setup/common.sh@33 -- # return 0 00:04:05.447 20:10:35 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:05.447 20:10:35 -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.447 20:10:35 -- setup/hugepages.sh@27 -- # local node 00:04:05.447 20:10:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.447 20:10:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:05.447 20:10:35 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:05.447 20:10:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.447 20:10:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.447 20:10:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.447 20:10:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.447 20:10:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.447 20:10:35 -- setup/common.sh@18 -- # local node=0 00:04:05.447 20:10:35 -- setup/common.sh@19 -- # local var val 00:04:05.447 20:10:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:05.447 20:10:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.447 20:10:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.447 20:10:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.447 20:10:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.447 20:10:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.447 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.447 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.447 20:10:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8531752 kB' 'MemUsed: 3710220 kB' 'SwapCached: 0 kB' 'Active: 855940 kB' 'Inactive: 1474140 kB' 'Active(anon): 129532 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 996 kB' 'Writeback: 0 kB' 'FilePages: 2211020 kB' 'Mapped: 48188 kB' 'AnonPages: 120632 kB' 'Shmem: 10472 kB' 'KernelStack: 6352 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66212 kB' 'Slab: 143400 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77188 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:05.447 20:10:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.447 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.447 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.447 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.447 20:10:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.447 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.447 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.447 20:10:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:05.447 20:10:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.447 20:10:35 -- setup/common.sh@32 -- # continue 00:04:05.447 20:10:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:05.447 20:10:35 -- 
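
[Editor's note: the get_nodes walk just traced enumerates NUMA nodes from sysfs with the extglob pattern node+([0-9]), records the expected per-node page count, then re-queries each node's counters from its own meminfo file. A condensed sketch of that walk, again using the hypothetical helper from above:]

shopt -s extglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips the path down to the numeric node index
        nodes_sys[${node##*node}]=$(get_meminfo_sketch HugePages_Total "${node##*node}")
done
echo "no_nodes=${#nodes_sys[@]}"   # the trace above reports no_nodes=1
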
00:04:05.447 20:10:35 [... per-key xtrace condensed: the node0 scan tests MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free against HugePages_Surp at setup/common.sh@32; none match and each is skipped via continue ...] 00:04:05.448 20:10:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.448 20:10:35 -- setup/common.sh@33 -- # echo 0 00:04:05.448 20:10:35 -- setup/common.sh@33 -- # return 0 00:04:05.448 20:10:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.448 20:10:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.448 20:10:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.448 20:10:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.448 20:10:35 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:05.448 node0=512 expecting 512 00:04:05.448 20:10:35 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:05.448 00:04:05.448 real 0m0.948s 00:04:05.448 user 0m0.440s 00:04:05.448 sys 0m0.574s 00:04:05.448 20:10:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:05.448 20:10:35 -- common/autotest_common.sh@10 -- # set +x 00:04:05.448 ************************************ 00:04:05.448 END TEST custom_alloc 00:04:05.448 ************************************
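The START/END banners and the real/user/sys timing around each test come from the run_test harness in common/autotest_common.sh (the '[' 2 -le 1 ']' check below is its argument-count guard). A rough sketch of that wrapper, inferred from the log output; names and details are assumptions, not the verbatim harness:

    run_test() {    # run_test <name> <command> [args...]
        [ $# -le 1 ] && { echo 'run_test needs a name and a command' >&2; return 1; }
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"    # produces the real/user/sys lines seen in the log
        local rc=$?
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
        return $rc
    }

Invoked as run_test no_shrink_alloc no_shrink_alloc, the guard sees two arguments ('[' 2 -le 1 ']' evaluates false) and the wrapper proceeds to the banners below.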
00:04:05.448 20:10:35 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:05.448 20:10:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:05.448 20:10:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:05.448 20:10:35 -- common/autotest_common.sh@10 -- # set +x 00:04:05.706 ************************************ 00:04:05.706 START TEST no_shrink_alloc 00:04:05.706 ************************************ 00:04:05.706 20:10:35 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:04:05.706 20:10:35 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:05.706 20:10:35 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:05.706 20:10:35 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:05.706 20:10:35 -- setup/hugepages.sh@51 -- # shift 00:04:05.706 20:10:35 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:05.706 20:10:35 -- setup/hugepages.sh@52 -- # local node_ids 00:04:05.706 20:10:35 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:05.706 20:10:35 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:05.706 20:10:35 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:05.706 20:10:35 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:05.706 20:10:35 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:05.706 20:10:35 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:05.706 20:10:35 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:05.706 20:10:35 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:05.706 20:10:35 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:05.706 20:10:35 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:05.706 20:10:35 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:05.706 20:10:35 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:05.706 20:10:35 -- setup/hugepages.sh@73 -- # return 0
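The sizing arithmetic just traced: get_test_nr_hugepages takes a size in kB (2097152 kB, i.e. 2 GiB) and lands on nr_hugepages=1024, which with the 2048 kB Hugepagesize in the snapshots is exactly size divided by hugepage size; the whole count is then pinned to the requested node. A hedged sketch of that logic (the division step is an assumption, since the trace only shows its result):

    declare -ga nodes_test=()

    get_test_nr_hugepages() {   # get_test_nr_hugepages <size-kB> [node...]
        local size=$1
        shift
        local node_ids=("$@")
        local default_hugepages=2048            # kB, per 'Hugepagesize: 2048 kB'
        (( size >= default_hugepages )) || return 1
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[node]=$nr_hugepages      # trace: nodes_test[0]=1024
        done
        return 0
    }

The 512-page target in custom_alloc above would fall out of the same arithmetic with half the size.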
00:04:05.706 20:10:35 -- setup/hugepages.sh@198 -- # setup output 00:04:05.706 20:10:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.706 20:10:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:06.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.536 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.536 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.536 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.536 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:06.536 20:10:36 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:06.536 20:10:36 -- setup/hugepages.sh@89 -- # local node 00:04:06.536 20:10:36 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.536 20:10:36 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.536 20:10:36 -- setup/hugepages.sh@92 -- # local surp 00:04:06.536 20:10:36 -- setup/hugepages.sh@93 -- # local resv 00:04:06.536 20:10:36 -- setup/hugepages.sh@94 -- # local anon 00:04:06.536 20:10:36 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.536 20:10:36 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.536 20:10:36 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.536 20:10:36 -- setup/common.sh@18 -- # local node= 00:04:06.536 20:10:36 -- setup/common.sh@19 -- # local var val 00:04:06.536 20:10:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.536 20:10:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.536 20:10:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.536 20:10:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.536 20:10:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.536 20:10:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.536 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.536 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.536 20:10:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7487976 kB' 'MemAvailable: 9484692 kB' 'Buffers: 2436 kB' 'Cached: 2208584 kB' 'SwapCached: 0 kB' 'Active: 856148 kB' 'Inactive: 1474140 kB' 'Active(anon): 129740 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1144 kB' 'Writeback: 0 kB' 'AnonPages: 121100 kB' 'Mapped: 48236 kB' 'Shmem: 10472 kB' 'KReclaimable: 66212 kB' 'Slab: 143372 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77160 kB' 'KernelStack: 6368 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 341164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:04:06.536 20:10:36 [... per-key xtrace condensed: MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce and WritebackTmp are each tested against AnonHugePages at setup/common.sh@32 and skipped via continue ...]
00:04:06.537 20:10:36 [... per-key xtrace condensed: CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted are likewise tested against AnonHugePages and skipped via continue ...] 00:04:06.537 20:10:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.537 20:10:36 -- setup/common.sh@33 -- # echo 0 00:04:06.537 20:10:36 -- setup/common.sh@33 -- # return 0 00:04:06.537 20:10:36 -- setup/hugepages.sh@97 -- # anon=0 00:04:06.537 20:10:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.537 20:10:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.537 20:10:36 -- setup/common.sh@18 -- # local node= 00:04:06.537 20:10:36 -- setup/common.sh@19 -- # local var val 00:04:06.537 20:10:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.537 20:10:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.537 20:10:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.537 20:10:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.537 20:10:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.537 20:10:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.537 20:10:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7487976 kB' 'MemAvailable: 9484692 kB' 'Buffers: 2436 kB' 'Cached: 2208584 kB' 'SwapCached: 0 kB' 'Active: 855772 kB' 'Inactive: 1474140 kB' 'Active(anon): 129364 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1144 kB' 'Writeback: 0 kB' 'AnonPages: 120756 kB' 'Mapped: 48196 kB' 'Shmem: 10472 kB' 'KReclaimable: 66212 kB' 'Slab: 143372 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77160 kB' 'KernelStack: 6368 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 341164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:06.538 20:10:36 [... per-key xtrace condensed: MemTotal through Unaccepted are tested against HugePages_Surp at setup/common.sh@32 and skipped via continue ...]
00:04:06.539 20:10:36 [... HugePages_Total, HugePages_Free and HugePages_Rsvd likewise fail to match and are skipped ...] 00:04:06.539 20:10:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.539 20:10:36 -- setup/common.sh@33 -- # echo 0 00:04:06.539 20:10:36 -- setup/common.sh@33 -- # return 0 00:04:06.539 20:10:36 -- setup/hugepages.sh@99 -- # surp=0 00:04:06.539 20:10:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.539 20:10:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.539 20:10:36 -- setup/common.sh@18 -- # local node= 00:04:06.539 20:10:36 -- setup/common.sh@19 -- # local var val 00:04:06.539 20:10:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.539 20:10:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.539 20:10:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.539 20:10:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.539 20:10:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.539 20:10:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.539 20:10:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7487976 kB' 'MemAvailable: 9484692 kB' 'Buffers: 2436 kB' 'Cached: 2208584 kB' 'SwapCached: 0 kB' 'Active: 855944 kB' 'Inactive: 1474140 kB' 'Active(anon): 129536 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1144 kB' 'Writeback: 0 kB' 'AnonPages: 120636 kB' 'Mapped: 48196 kB' 'Shmem: 10472 kB' 'KReclaimable: 66212 kB' 'Slab: 143372 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77160 kB' 'KernelStack: 6352 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 341164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB'
00:04:06.539 20:10:36 [... per-key xtrace condensed: MemTotal through HugePages_Free are tested against HugePages_Rsvd at setup/common.sh@32 and skipped via continue ...] 00:04:06.540 20:10:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.540 20:10:36 -- setup/common.sh@33 -- # echo 0 00:04:06.540 20:10:36 -- setup/common.sh@33 -- # return 0
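That return 0 completes the third of four get_meminfo reads in verify_nr_hugepages: anon, surp and resv are all 0, and the lines that follow echo them and check the totals against the 1024-page target. A compact sketch of the verification flow, pieced together from the hugepages.sh line numbers in the trace (hedged, not the verbatim script):

    verify_nr_hugepages() {     # relies on get_meminfo and the global nr_hugepages
        local anon surp resv total
        anon=$(get_meminfo AnonHugePages)   # THP must not inflate the count
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"
        total=$(get_meminfo HugePages_Total)
        (( total == nr_hugepages + surp + resv ))   # the hugepages.sh@107-style check
    }

With total=1024 and surp=resv=0, the assertion below holds, matching the nr_hugepages=1024 echoes.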
00:04:06.540 20:10:36 -- setup/hugepages.sh@100 -- # resv=0 00:04:06.540 20:10:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.540 nr_hugepages=1024 00:04:06.540 resv_hugepages=0 00:04:06.540 20:10:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.540 surplus_hugepages=0 00:04:06.540 20:10:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.540 anon_hugepages=0 00:04:06.540 20:10:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.540 20:10:36 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.540 20:10:36 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.540 20:10:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.540 20:10:36 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.540 20:10:36 -- setup/common.sh@18 -- # local node= 00:04:06.540 20:10:36 -- setup/common.sh@19 -- # local var val 00:04:06.540 20:10:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.540 20:10:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.540 20:10:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.540 20:10:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.540 20:10:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.540 20:10:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7487976 kB' 'MemAvailable: 9484692 kB' 'Buffers: 2436 kB' 'Cached: 2208584 kB' 'SwapCached: 0 kB' 'Active: 855944 kB' 'Inactive: 1474140 kB' 'Active(anon): 129536 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1144 kB' 'Writeback: 0 kB' 'AnonPages: 120636 kB' 'Mapped: 48196 kB' 'Shmem: 10472 kB' 'KReclaimable: 66212 kB' 'Slab: 143372 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77160 kB' 'KernelStack: 6352 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 341164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55092 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 
20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # 
[[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.541 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.541 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 
20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:06.542 20:10:36 -- setup/common.sh@33 -- # echo 1024 00:04:06.542 20:10:36 -- setup/common.sh@33 -- # return 0 00:04:06.542 20:10:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.542 20:10:36 -- setup/hugepages.sh@112 -- # get_nodes 00:04:06.542 20:10:36 -- setup/hugepages.sh@27 -- # local node 00:04:06.542 20:10:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:06.542 20:10:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:06.542 20:10:36 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:06.542 20:10:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:06.542 20:10:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:06.542 20:10:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:06.542 20:10:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:06.542 20:10:36 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.542 20:10:36 -- setup/common.sh@18 -- # local node=0 00:04:06.542 20:10:36 -- setup/common.sh@19 -- # local var val 00:04:06.542 20:10:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:06.542 20:10:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
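[editor's note] The call that starts here, get_meminfo HugePages_Surp 0, carries a node argument, so before the scan repeats the script swaps /proc/meminfo for the per-node sysfs file and strips the 'Node <N> ' prefix those lines carry (common.sh@23-@29 in the trace that follows). Sketched under the sysfs layout shown in this run, with pick_meminfo_file as a hypothetical name:

    pick_meminfo_file() {
        local node=$1 mem_f=/proc/meminfo
        # per-node stats live in sysfs; fall back to the global file otherwise
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }

    pick_meminfo_file 0   # -> /sys/devices/system/node/node0/meminfo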
00:04:06.542 20:10:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:06.542 20:10:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:06.542 20:10:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.542 20:10:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7487976 kB' 'MemUsed: 4753996 kB' 'SwapCached: 0 kB' 'Active: 855920 kB' 'Inactive: 1474140 kB' 'Active(anon): 129512 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474140 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1144 kB' 'Writeback: 0 kB' 'FilePages: 2211020 kB' 'Mapped: 48196 kB' 'AnonPages: 120872 kB' 'Shmem: 10472 kB' 'KernelStack: 6336 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66212 kB' 'Slab: 143372 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.542 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.542 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 
20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ 
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # continue 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:06.543 20:10:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:06.543 20:10:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.543 20:10:36 -- setup/common.sh@33 -- # echo 0 00:04:06.543 20:10:36 -- setup/common.sh@33 -- # return 0 00:04:06.543 20:10:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:06.543 20:10:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:06.543 20:10:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:06.543 20:10:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:06.543 20:10:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:06.543 node0=1024 expecting 1024 00:04:06.543 20:10:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:06.543 20:10:36 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:06.543 20:10:36 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:06.543 20:10:36 -- setup/hugepages.sh@202 -- # setup output 00:04:06.543 20:10:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.543 20:10:36 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.110 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.373 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.373 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.373 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.373 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:07.373 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:07.373 20:10:37 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:07.373 20:10:37 -- setup/hugepages.sh@89 -- # local node 00:04:07.373 20:10:37 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:07.373 20:10:37 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:07.373 20:10:37 -- setup/hugepages.sh@92 -- # local surp 00:04:07.373 20:10:37 -- setup/hugepages.sh@93 -- # local resv 00:04:07.373 20:10:37 -- setup/hugepages.sh@94 -- # local anon 00:04:07.373 20:10:37 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:07.373 20:10:37 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:07.373 20:10:37 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:07.373 20:10:37 -- setup/common.sh@18 -- # local node= 00:04:07.373 20:10:37 -- setup/common.sh@19 -- # local var val 00:04:07.373 20:10:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.373 20:10:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.373 20:10:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.373 20:10:37 -- setup/common.sh@25 -- # [[ -n '' 
]] 00:04:07.373 20:10:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.373 20:10:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7491336 kB' 'MemAvailable: 9488056 kB' 'Buffers: 2436 kB' 'Cached: 2208588 kB' 'SwapCached: 0 kB' 'Active: 856424 kB' 'Inactive: 1474144 kB' 'Active(anon): 130016 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1132 kB' 'Writeback: 0 kB' 'AnonPages: 121120 kB' 'Mapped: 48464 kB' 'Shmem: 10472 kB' 'KReclaimable: 66212 kB' 'Slab: 143396 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77184 kB' 'KernelStack: 6368 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 341164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55108 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.373 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.373 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 
20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.374 20:10:37 -- setup/common.sh@33 -- # echo 0 00:04:07.374 20:10:37 -- setup/common.sh@33 -- # return 0 00:04:07.374 20:10:37 -- setup/hugepages.sh@97 -- # anon=0 00:04:07.374 20:10:37 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.374 20:10:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.374 20:10:37 -- setup/common.sh@18 -- # local node= 00:04:07.374 20:10:37 -- setup/common.sh@19 -- # local var val 00:04:07.374 20:10:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.374 20:10:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.374 20:10:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.374 20:10:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.374 20:10:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.374 20:10:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.374 20:10:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7491336 kB' 'MemAvailable: 9488056 kB' 'Buffers: 2436 kB' 'Cached: 2208588 kB' 'SwapCached: 0 kB' 'Active: 855884 kB' 'Inactive: 1474144 kB' 'Active(anon): 129476 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1132 kB' 'Writeback: 0 
kB' 'AnonPages: 120608 kB' 'Mapped: 48196 kB' 'Shmem: 10472 kB' 'KReclaimable: 66212 kB' 'Slab: 143396 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77184 kB' 'KernelStack: 6376 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 341164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.374 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.374 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 
00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.375 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.375 20:10:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.375 20:10:37 -- setup/common.sh@33 -- # echo 0 00:04:07.375 20:10:37 -- setup/common.sh@33 -- # return 0 00:04:07.375 20:10:37 -- setup/hugepages.sh@99 -- # surp=0 00:04:07.375 20:10:37 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.376 20:10:37 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.376 20:10:37 -- setup/common.sh@18 -- # local node= 00:04:07.376 20:10:37 -- setup/common.sh@19 -- # local var val 00:04:07.376 20:10:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.376 20:10:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.376 20:10:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.376 20:10:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.376 20:10:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.376 20:10:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.376 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.376 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.376 20:10:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7491336 kB' 'MemAvailable: 9488056 kB' 'Buffers: 2436 kB' 'Cached: 2208588 kB' 'SwapCached: 0 kB' 'Active: 855772 kB' 'Inactive: 1474144 kB' 'Active(anon): 129364 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1132 kB' 'Writeback: 0 kB' 'AnonPages: 120724 kB' 'Mapped: 48196 kB' 'Shmem: 10472 kB' 'KReclaimable: 66212 kB' 'Slab: 143396 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77184 kB' 'KernelStack: 6360 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 341164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 
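The wall of [[ $var == ... ]] / continue traces above and below is setup/common.sh's get_meminfo scanning the meminfo snapshot it just printf'd, one field at a time, until the requested key matches. A minimal standalone sketch of that pattern (simplified from the traced script, not a verbatim copy of test/setup/common.sh):

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above (simplified).
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local line var val _
    # With a node id, read the per-node statistics from sysfs instead,
    # exactly as the trace does for /sys/devices/system/node/node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local mem=()
    mapfile -t mem < "$mem_f"
    # sysfs lines carry a "Node N " prefix that /proc/meminfo lacks.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "HugePages_Surp:       0" into key and value.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp     # prints 0 on the box traced above
get_meminfo HugePages_Total 0  # per-node variant, prints 1024 here

The linear scan is why each lookup produces dozens of identical trace lines: every non-matching key costs one read and one continue.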
00:04:07.376 20:10:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.376 20:10:37 -- setup/common.sh@32 -- # continue [xtrace condensed: the field-matching loop walks MemFree through HugePages_Free the same way, with no match for HugePages_Rsvd] 00:04:07.377 20:10:37 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.377 20:10:37 -- setup/common.sh@33 -- # echo 0 
00:04:07.377 20:10:37 -- setup/common.sh@33 -- # return 0 00:04:07.377 20:10:37 -- setup/hugepages.sh@100 -- # resv=0 00:04:07.377 20:10:37 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.377 nr_hugepages=1024 00:04:07.377 resv_hugepages=0 00:04:07.377 20:10:37 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.377 surplus_hugepages=0 00:04:07.377 20:10:37 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.377 20:10:37 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.377 anon_hugepages=0 00:04:07.377 20:10:37 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.377 20:10:37 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.377 20:10:37 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.377 20:10:37 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.377 20:10:37 -- setup/common.sh@18 -- # local node= 00:04:07.377 20:10:37 -- setup/common.sh@19 -- # local var val 00:04:07.377 20:10:37 -- setup/common.sh@20 -- # local mem_f mem 00:04:07.377 20:10:37 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.377 20:10:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.377 20:10:37 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.377 20:10:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.377 20:10:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.377 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.377 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.377 20:10:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7491336 kB' 'MemAvailable: 9488056 kB' 'Buffers: 2436 kB' 'Cached: 2208588 kB' 'SwapCached: 0 kB' 'Active: 855856 kB' 'Inactive: 1474144 kB' 'Active(anon): 129448 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1132 kB' 'Writeback: 0 kB' 'AnonPages: 120552 kB' 'Mapped: 48196 kB' 'Shmem: 10472 kB' 'KReclaimable: 66212 kB' 'Slab: 143396 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77184 kB' 'KernelStack: 6376 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 341164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 6107136 kB' 'DirectMap1G: 8388608 kB' 00:04:07.377 20:10:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.377 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.377 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.377 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.377 20:10:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.377 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.377 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.377 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.377 20:10:37 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.377 
20:10:37 -- setup/common.sh@32 -- # continue [xtrace condensed: the field-matching loop walks Buffers through HugePages_Free the same way, until the requested field matches] 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.638 20:10:37 -- setup/common.sh@33 -- # echo 1024 00:04:07.638 20:10:37 -- setup/common.sh@33 -- # return 0 00:04:07.638 20:10:37 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.638 20:10:37 -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.638 20:10:37 -- setup/hugepages.sh@27 -- # local node 00:04:07.638 20:10:37 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.638 20:10:37 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.638 20:10:37 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:07.638 20:10:37 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.638 20:10:37 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.638 20:10:37 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.638 20:10:37 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.638 20:10:37 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.638 20:10:37 -- setup/common.sh@18 -- # local node=0 00:04:07.638 20:10:37 -- setup/common.sh@19 -- # local var val 00:04:07.638 20:10:37 -- setup/common.sh@20 -- # local mem_f mem
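What the hugepages.sh@107-117 lines are verifying: the pool the test configured must be fully accounted for by the kernel, first globally and then per NUMA node. A hedged sketch of that arithmetic (it reuses the get_meminfo sketch above; the helper names and the nodes_test seed value are assumptions, not the verbatim test code):

# Global consistency: the kernel's total must equal what we requested
# plus any surplus and reserved pages.
nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)    # 0 in the trace
resv=$(get_meminfo HugePages_Rsvd)    # 0 in the trace
total=$(get_meminfo HugePages_Total)  # 1024 in the trace
(( total == nr_hugepages + surp + resv )) || echo "hugepage pool mismatch" >&2

# Per-node pass: fold the reserve into each node's expected count, then
# add that node's surplus as reported by its sysfs meminfo.
declare -A nodes_test=([0]=1024)      # expected pages per node (assumed)
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
done

On this single-node VM both passes come out even, which is what the "node0=1024 expecting 1024" marker below reports.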
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.638 20:10:37 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.638 20:10:37 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.638 20:10:37 -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.638 20:10:37 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.638 20:10:37 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 7491336 kB' 'MemUsed: 4750636 kB' 'SwapCached: 0 kB' 'Active: 855780 kB' 'Inactive: 1474144 kB' 'Active(anon): 129372 kB' 'Inactive(anon): 0 kB' 'Active(file): 726408 kB' 'Inactive(file): 1474144 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1132 kB' 'Writeback: 0 kB' 'FilePages: 2211024 kB' 'Mapped: 48196 kB' 'AnonPages: 120736 kB' 'Shmem: 10472 kB' 'KernelStack: 6360 kB' 'PageTables: 3884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 66212 kB' 'Slab: 143396 kB' 'SReclaimable: 66212 kB' 'SUnreclaim: 77184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.638 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.638 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.639 20:10:37 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.639 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.639 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 
00:04:07.639 20:10:37 -- setup/common.sh@31 -- # read -r var val _ [xtrace condensed: the field-matching loop walks node0's meminfo fields, Active(file) through FilePmdMapped, with no match for HugePages_Surp] 00:04:07.639 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.639 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 
00:04:07.639 20:10:37 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.639 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.639 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.639 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.639 20:10:37 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.639 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.639 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.639 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.639 20:10:37 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.639 20:10:37 -- setup/common.sh@32 -- # continue 00:04:07.639 20:10:37 -- setup/common.sh@31 -- # IFS=': ' 00:04:07.639 20:10:37 -- setup/common.sh@31 -- # read -r var val _ 00:04:07.639 20:10:37 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.639 20:10:37 -- setup/common.sh@33 -- # echo 0 00:04:07.639 20:10:37 -- setup/common.sh@33 -- # return 0 00:04:07.639 20:10:37 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.639 20:10:37 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.639 20:10:37 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.639 20:10:37 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.639 20:10:37 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.639 node0=1024 expecting 1024 00:04:07.639 20:10:37 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.639 00:04:07.639 real 0m1.889s 00:04:07.639 user 0m0.836s 00:04:07.639 sys 0m1.194s 00:04:07.639 20:10:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:07.639 20:10:37 -- common/autotest_common.sh@10 -- # set +x 00:04:07.639 ************************************ 00:04:07.639 END TEST no_shrink_alloc 00:04:07.639 ************************************ 00:04:07.639 20:10:37 -- setup/hugepages.sh@217 -- # clear_hp 00:04:07.639 20:10:37 -- setup/hugepages.sh@37 -- # local node hp 00:04:07.639 20:10:37 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.639 20:10:37 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.639 20:10:37 -- setup/hugepages.sh@41 -- # echo 0 00:04:07.639 20:10:37 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.639 20:10:37 -- setup/hugepages.sh@41 -- # echo 0 00:04:07.639 20:10:37 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:07.639 20:10:37 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:07.639 ************************************ 00:04:07.639 END TEST hugepages 00:04:07.639 ************************************ 00:04:07.639 00:04:07.639 real 0m8.588s 00:04:07.639 user 0m3.573s 00:04:07.639 sys 0m5.203s 00:04:07.639 20:10:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:07.639 20:10:37 -- common/autotest_common.sh@10 -- # set +x 00:04:07.639 20:10:37 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:07.639 20:10:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.639 20:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.639 20:10:37 -- common/autotest_common.sh@10 -- # set +x 00:04:07.639 ************************************ 00:04:07.639 START TEST driver 00:04:07.639 ************************************ 00:04:07.639 20:10:37 -- 
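clear_hp, traced just before the hugepages suite ends, resets the pool by writing 0 to every per-node hugepage size directory and exporting CLEAR_HUGE. A minimal sketch of that teardown, mirroring the loop in the trace (needs root to write the sysfs files):

# Sketch of the clear_hp teardown traced above: zero every hugepage
# pool on every NUMA node, then flag the environment as cleared.
clear_hp() {
    local node hp
    for node in /sys/devices/system/node/node*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # e.g. .../hugepages-2048kB/nr_hugepages <- 0
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes
}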
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:07.899 * Looking for test storage... 00:04:07.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:07.899 20:10:37 -- setup/driver.sh@68 -- # setup reset 00:04:07.899 20:10:37 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.899 20:10:37 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:14.505 20:10:44 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:14.505 20:10:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:14.505 20:10:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:14.505 20:10:44 -- common/autotest_common.sh@10 -- # set +x 00:04:14.505 ************************************ 00:04:14.505 START TEST guess_driver 00:04:14.505 ************************************ 00:04:14.505 20:10:44 -- common/autotest_common.sh@1111 -- # guess_driver 00:04:14.505 20:10:44 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:14.505 20:10:44 -- setup/driver.sh@47 -- # local fail=0 00:04:14.505 20:10:44 -- setup/driver.sh@49 -- # pick_driver 00:04:14.505 20:10:44 -- setup/driver.sh@36 -- # vfio 00:04:14.505 20:10:44 -- setup/driver.sh@21 -- # local iommu_grups 00:04:14.505 20:10:44 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:14.505 20:10:44 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:14.505 20:10:44 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:14.505 20:10:44 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:14.505 20:10:44 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:14.505 20:10:44 -- setup/driver.sh@32 -- # return 1 00:04:14.505 20:10:44 -- setup/driver.sh@38 -- # uio 00:04:14.505 20:10:44 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:14.505 20:10:44 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:14.505 20:10:44 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:14.505 20:10:44 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:14.505 20:10:44 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:14.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:14.505 20:10:44 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:14.505 Looking for driver=uio_pci_generic 00:04:14.505 20:10:44 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:14.505 20:10:44 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:14.505 20:10:44 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:14.506 20:10:44 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.506 20:10:44 -- setup/driver.sh@45 -- # setup output config 00:04:14.506 20:10:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:14.506 20:10:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:14.765 20:10:44 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:14.765 20:10:44 -- setup/driver.sh@58 -- # continue 00:04:14.765 20:10:44 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.732 20:10:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.732 20:10:45 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:15.732 20:10:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.732 20:10:45 -- setup/driver.sh@58 -- # [[ -> 
== \-\> ]] 00:04:15.732 20:10:45 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:15.732 20:10:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.732 20:10:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.732 20:10:45 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:15.732 20:10:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.732 20:10:45 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.732 20:10:45 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:15.732 20:10:45 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.732 20:10:45 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:15.732 20:10:45 -- setup/driver.sh@65 -- # setup reset 00:04:15.732 20:10:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.732 20:10:45 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:22.315 ************************************ 00:04:22.315 END TEST guess_driver 00:04:22.315 ************************************ 00:04:22.316 00:04:22.316 real 0m7.665s 00:04:22.316 user 0m0.899s 00:04:22.316 sys 0m1.916s 00:04:22.316 20:10:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.316 20:10:51 -- common/autotest_common.sh@10 -- # set +x 00:04:22.316 ************************************ 00:04:22.316 END TEST driver 00:04:22.316 ************************************ 00:04:22.316 00:04:22.316 real 0m14.149s 00:04:22.316 user 0m1.384s 00:04:22.316 sys 0m3.068s 00:04:22.316 20:10:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:22.316 20:10:52 -- common/autotest_common.sh@10 -- # set +x 00:04:22.316 20:10:52 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:22.316 20:10:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:22.316 20:10:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:22.316 20:10:52 -- common/autotest_common.sh@10 -- # set +x 00:04:22.316 ************************************ 00:04:22.316 START TEST devices 00:04:22.316 ************************************ 00:04:22.316 20:10:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:22.316 * Looking for test storage... 
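The guess_driver test above picks vfio only when IOMMU groups exist (or unsafe no-IOMMU mode is enabled) and otherwise falls back to uio_pci_generic, using modprobe --show-depends to confirm the module resolves to real .ko files. A condensed sketch of that decision, under the assumption that this faithfully summarizes test/setup/driver.sh (error handling omitted):

# Sketch of the driver-selection logic traced above.
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=''
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    # vfio is usable only with IOMMU groups or unsafe no-IOMMU mode on.
    if (( ${#groups[@]} > 0 )) && [[ -e ${groups[0]} ]] || [[ $unsafe == Y ]]; then
        echo vfio-pci
    elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
        # Fall back to uio when the module and its deps resolve to .ko files.
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}

driver=$(pick_driver)  # -> uio_pci_generic in this run (0 IOMMU groups, '' != Y)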
00:04:22.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:22.316 20:10:52 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:22.316 20:10:52 -- setup/devices.sh@192 -- # setup reset 00:04:22.316 20:10:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.316 20:10:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:23.694 20:10:53 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:23.694 20:10:53 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:23.694 20:10:53 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:23.694 20:10:53 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:23.694 20:10:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.694 20:10:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:23.694 20:10:53 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:23.694 20:10:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.694 20:10:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:23.694 20:10:53 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:23.694 20:10:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.694 20:10:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:23.694 20:10:53 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:23.694 20:10:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.694 20:10:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:23.694 20:10:53 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:23.694 20:10:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.694 20:10:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:23.694 20:10:53 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:23.694 20:10:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.694 20:10:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:23.694 20:10:53 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:23.694 20:10:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:23.694 20:10:53 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:23.694 20:10:53 -- common/autotest_common.sh@1648 -- # local 
device=nvme3n1 00:04:23.694 20:10:53 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:23.694 20:10:53 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:23.694 20:10:53 -- setup/devices.sh@196 -- # blocks=() 00:04:23.694 20:10:53 -- setup/devices.sh@196 -- # declare -a blocks 00:04:23.694 20:10:53 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:23.694 20:10:53 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:23.694 20:10:53 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:23.694 20:10:53 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.694 20:10:53 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:23.694 20:10:53 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:23.694 20:10:53 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:04:23.694 20:10:53 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:04:23.694 20:10:53 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:23.694 20:10:53 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:23.694 20:10:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:23.694 No valid GPT data, bailing 00:04:23.694 20:10:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:23.694 20:10:53 -- scripts/common.sh@391 -- # pt= 00:04:23.694 20:10:53 -- scripts/common.sh@392 -- # return 1 00:04:23.694 20:10:53 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:23.694 20:10:53 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:23.694 20:10:53 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:23.694 20:10:53 -- setup/common.sh@80 -- # echo 5368709120 00:04:23.694 20:10:53 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:23.694 20:10:53 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.694 20:10:53 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:04:23.694 20:10:53 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.694 20:10:53 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:23.694 20:10:53 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:23.694 20:10:53 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:04:23.694 20:10:53 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:04:23.694 20:10:53 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:23.694 20:10:53 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:23.694 20:10:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:23.694 No valid GPT data, bailing 00:04:23.953 20:10:53 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:23.953 20:10:53 -- scripts/common.sh@391 -- # pt= 00:04:23.953 20:10:53 -- scripts/common.sh@392 -- # return 1 00:04:23.953 20:10:53 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:23.953 20:10:53 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:23.953 20:10:53 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:23.953 20:10:53 -- setup/common.sh@80 -- # echo 6343335936 00:04:23.953 20:10:53 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:23.953 20:10:53 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.953 20:10:53 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:04:23.953 20:10:53 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.953 20:10:53 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:23.953 20:10:53 -- 
setup/devices.sh@201 -- # ctrl=nvme2 00:04:23.953 20:10:53 -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:23.953 20:10:53 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:23.953 20:10:53 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:23.953 20:10:53 -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:04:23.953 20:10:53 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:23.953 No valid GPT data, bailing 00:04:23.953 20:10:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:23.953 20:10:54 -- scripts/common.sh@391 -- # pt= 00:04:23.953 20:10:54 -- scripts/common.sh@392 -- # return 1 00:04:23.953 20:10:54 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:23.953 20:10:54 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:23.953 20:10:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:23.953 20:10:54 -- setup/common.sh@80 -- # echo 4294967296 00:04:23.953 20:10:54 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.954 20:10:54 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.954 20:10:54 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:23.954 20:10:54 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.954 20:10:54 -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:04:23.954 20:10:54 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:23.954 20:10:54 -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:23.954 20:10:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:23.954 20:10:54 -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:04:23.954 20:10:54 -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:04:23.954 20:10:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:04:23.954 No valid GPT data, bailing 00:04:23.954 20:10:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:23.954 20:10:54 -- scripts/common.sh@391 -- # pt= 00:04:23.954 20:10:54 -- scripts/common.sh@392 -- # return 1 00:04:23.954 20:10:54 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:04:23.954 20:10:54 -- setup/common.sh@76 -- # local dev=nvme2n2 00:04:23.954 20:10:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:04:23.954 20:10:54 -- setup/common.sh@80 -- # echo 4294967296 00:04:23.954 20:10:54 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:23.954 20:10:54 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:23.954 20:10:54 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:23.954 20:10:54 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:23.954 20:10:54 -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:04:23.954 20:10:54 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:23.954 20:10:54 -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:04:23.954 20:10:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:04:23.954 20:10:54 -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:04:23.954 20:10:54 -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:04:23.954 20:10:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:04:23.954 No valid GPT data, bailing 00:04:23.954 20:10:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:24.215 20:10:54 -- scripts/common.sh@391 -- # pt= 00:04:24.215 20:10:54 -- scripts/common.sh@392 -- # return 1 00:04:24.215 20:10:54 -- 
setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:04:24.215 20:10:54 -- setup/common.sh@76 -- # local dev=nvme2n3 00:04:24.215 20:10:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:04:24.215 20:10:54 -- setup/common.sh@80 -- # echo 4294967296 00:04:24.215 20:10:54 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:24.215 20:10:54 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:24.215 20:10:54 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:04:24.215 20:10:54 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:24.215 20:10:54 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:24.215 20:10:54 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:24.215 20:10:54 -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:04:24.215 20:10:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:04:24.215 20:10:54 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:24.215 20:10:54 -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:04:24.215 20:10:54 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:24.215 No valid GPT data, bailing 00:04:24.215 20:10:54 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:24.215 20:10:54 -- scripts/common.sh@391 -- # pt= 00:04:24.215 20:10:54 -- scripts/common.sh@392 -- # return 1 00:04:24.215 20:10:54 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:24.215 20:10:54 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:24.215 20:10:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:24.215 20:10:54 -- setup/common.sh@80 -- # echo 1073741824 00:04:24.215 20:10:54 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:24.215 20:10:54 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:24.215 20:10:54 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:24.215 20:10:54 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:24.215 20:10:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:24.215 20:10:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:24.215 20:10:54 -- common/autotest_common.sh@10 -- # set +x 00:04:24.215 ************************************ 00:04:24.215 START TEST nvme_mount 00:04:24.215 ************************************ 00:04:24.215 20:10:54 -- common/autotest_common.sh@1111 -- # nvme_mount 00:04:24.215 20:10:54 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:24.215 20:10:54 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:24.215 20:10:54 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:24.215 20:10:54 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:24.215 20:10:54 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:24.215 20:10:54 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:24.215 20:10:54 -- setup/common.sh@40 -- # local part_no=1 00:04:24.215 20:10:54 -- setup/common.sh@41 -- # local size=1073741824 00:04:24.215 20:10:54 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:24.215 20:10:54 -- setup/common.sh@44 -- # parts=() 00:04:24.215 20:10:54 -- setup/common.sh@44 -- # local parts 00:04:24.215 20:10:54 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:24.215 20:10:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.215 20:10:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:24.215 20:10:54 -- setup/common.sh@46 -- # (( part++ )) 
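
(The partition_drive helper being traced in these frames, from setup/common.sh, amounts to the sketch below: wipe any old GPT, then carve one 128 MiB partition per requested part while a background sync_dev_uevents.sh waits for the kernel to announce the new partition nodes. Reconstructed from the xtrace, not the verbatim script; the sector numbers match the sgdisk calls that follow.)

    disk=nvme0n1
    part_no=1
    size=1073741824
    (( size /= 4096 ))                    # 262144 sectors = 128 MiB at 512 B
    sgdisk "/dev/$disk" --zap-all         # destroys the old GPT, as logged
    part_start=0 part_end=0
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # flock serializes sgdisk against concurrent users of the same disk
        flock "/dev/$disk" sgdisk "/dev/$disk" --new=$part:$part_start:$part_end
    done
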
00:04:24.215 20:10:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:24.215 20:10:54 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:24.215 20:10:54 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:24.215 20:10:54 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:25.592 Creating new GPT entries in memory. 00:04:25.592 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:25.592 other utilities. 00:04:25.592 20:10:55 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:25.592 20:10:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.592 20:10:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.592 20:10:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.592 20:10:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:26.529 Creating new GPT entries in memory. 00:04:26.529 The operation has completed successfully. 00:04:26.529 20:10:56 -- setup/common.sh@57 -- # (( part++ )) 00:04:26.529 20:10:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.529 20:10:56 -- setup/common.sh@62 -- # wait 58885 00:04:26.529 20:10:56 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.529 20:10:56 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:26.529 20:10:56 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.529 20:10:56 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:26.529 20:10:56 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:26.529 20:10:56 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.529 20:10:56 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.529 20:10:56 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:26.529 20:10:56 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:26.529 20:10:56 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.529 20:10:56 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.529 20:10:56 -- setup/devices.sh@53 -- # local found=0 00:04:26.529 20:10:56 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:26.529 20:10:56 -- setup/devices.sh@56 -- # : 00:04:26.529 20:10:56 -- setup/devices.sh@59 -- # local pci status 00:04:26.529 20:10:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.529 20:10:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:26.529 20:10:56 -- setup/devices.sh@47 -- # setup output config 00:04:26.529 20:10:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.529 20:10:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:26.788 20:10:56 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:26.788 20:10:56 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:26.788 20:10:56 -- setup/devices.sh@63 -- # found=1 00:04:26.788 
20:10:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.788 20:10:56 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:26.788 20:10:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.047 20:10:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.048 20:10:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.048 20:10:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.048 20:10:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.048 20:10:57 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.048 20:10:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.307 20:10:57 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:27.307 20:10:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.566 20:10:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.566 20:10:57 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:27.566 20:10:57 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.566 20:10:57 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.566 20:10:57 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:27.566 20:10:57 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:27.566 20:10:57 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.566 20:10:57 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:27.825 20:10:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.825 20:10:57 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:27.825 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:27.825 20:10:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.825 20:10:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:28.084 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:28.084 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:28.084 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:28.085 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:28.085 20:10:58 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:28.085 20:10:58 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:28.085 20:10:58 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.085 20:10:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:28.085 20:10:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:28.085 20:10:58 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.085 20:10:58 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.085 20:10:58 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:28.085 20:10:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:28.085 20:10:58 -- setup/devices.sh@50 -- # local 
mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:28.085 20:10:58 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:28.085 20:10:58 -- setup/devices.sh@53 -- # local found=0 00:04:28.085 20:10:58 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.085 20:10:58 -- setup/devices.sh@56 -- # : 00:04:28.085 20:10:58 -- setup/devices.sh@59 -- # local pci status 00:04:28.085 20:10:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.085 20:10:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:28.085 20:10:58 -- setup/devices.sh@47 -- # setup output config 00:04:28.085 20:10:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.085 20:10:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:28.344 20:10:58 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.344 20:10:58 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:28.344 20:10:58 -- setup/devices.sh@63 -- # found=1 00:04:28.344 20:10:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.344 20:10:58 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.344 20:10:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.602 20:10:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.602 20:10:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.602 20:10:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.602 20:10:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.602 20:10:58 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:28.602 20:10:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.171 20:10:59 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.171 20:10:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.171 20:10:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.171 20:10:59 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:29.171 20:10:59 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.171 20:10:59 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.171 20:10:59 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:29.171 20:10:59 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.171 20:10:59 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:29.171 20:10:59 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:29.171 20:10:59 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:29.171 20:10:59 -- setup/devices.sh@50 -- # local mount_point= 00:04:29.171 20:10:59 -- setup/devices.sh@51 -- # local test_file= 00:04:29.171 20:10:59 -- setup/devices.sh@53 -- # local found=0 00:04:29.171 20:10:59 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:29.171 20:10:59 -- setup/devices.sh@59 -- # local pci status 00:04:29.171 20:10:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.171 20:10:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:29.171 20:10:59 -- setup/devices.sh@47 -- # 
setup output config 00:04:29.171 20:10:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.171 20:10:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.739 20:10:59 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.739 20:10:59 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:29.739 20:10:59 -- setup/devices.sh@63 -- # found=1 00:04:29.739 20:10:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.739 20:10:59 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.739 20:10:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.739 20:10:59 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.739 20:10:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.997 20:11:00 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.997 20:11:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.997 20:11:00 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:29.997 20:11:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.257 20:11:00 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:30.257 20:11:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.516 20:11:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.516 20:11:00 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:30.516 20:11:00 -- setup/devices.sh@68 -- # return 0 00:04:30.516 20:11:00 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:30.516 20:11:00 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.516 20:11:00 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.516 20:11:00 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.516 20:11:00 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:30.516 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.516 00:04:30.516 real 0m6.296s 00:04:30.516 user 0m1.626s 00:04:30.516 sys 0m2.364s 00:04:30.516 20:11:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:30.516 ************************************ 00:04:30.516 END TEST nvme_mount 00:04:30.516 ************************************ 00:04:30.516 20:11:00 -- common/autotest_common.sh@10 -- # set +x 00:04:30.516 20:11:00 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:30.516 20:11:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.516 20:11:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.516 20:11:00 -- common/autotest_common.sh@10 -- # set +x 00:04:30.776 ************************************ 00:04:30.776 START TEST dm_mount 00:04:30.776 ************************************ 00:04:30.776 20:11:00 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:30.776 20:11:00 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:30.776 20:11:00 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:30.776 20:11:00 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:30.776 20:11:00 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:30.776 20:11:00 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:30.776 20:11:00 -- setup/common.sh@40 -- # local part_no=2 00:04:30.776 20:11:00 -- setup/common.sh@41 -- # local size=1073741824 00:04:30.776 20:11:00 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:30.776 
20:11:00 -- setup/common.sh@44 -- # parts=() 00:04:30.776 20:11:00 -- setup/common.sh@44 -- # local parts 00:04:30.776 20:11:00 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:30.776 20:11:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.776 20:11:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:30.776 20:11:00 -- setup/common.sh@46 -- # (( part++ )) 00:04:30.776 20:11:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.776 20:11:00 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:30.776 20:11:00 -- setup/common.sh@46 -- # (( part++ )) 00:04:30.776 20:11:00 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:30.776 20:11:00 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:30.776 20:11:00 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:30.776 20:11:00 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:31.713 Creating new GPT entries in memory. 00:04:31.713 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:31.713 other utilities. 00:04:31.713 20:11:01 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:31.713 20:11:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.713 20:11:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:31.713 20:11:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:31.713 20:11:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:33.092 Creating new GPT entries in memory. 00:04:33.092 The operation has completed successfully. 00:04:33.092 20:11:02 -- setup/common.sh@57 -- # (( part++ )) 00:04:33.092 20:11:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.093 20:11:02 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:33.093 20:11:02 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:33.093 20:11:02 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:34.030 The operation has completed successfully. 
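
(A few frames below, the dm_mount test joins the two partitions just created, sectors 2048-264191 and 264192-526335, into a single linear device-mapper target via `dmsetup create nvme_dm_test`. The table itself is not echoed in the log; a linear concatenation consistent with those partition sizes would look like the hypothetical sketch below.)

    # Hypothetical dm table: two 262144-sector (128 MiB) segments mapped
    # back to back; dmsetup create reads the table from stdin.
    printf '%s\n' \
        '0 262144 linear /dev/nvme0n1p1 0' \
        '262144 262144 linear /dev/nvme0n1p2 0' \
        | dmsetup create nvme_dm_test
    # The result surfaces as /dev/dm-0, which the test resolves with
    # readlink -f /dev/mapper/nvme_dm_test in the frames below.
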
00:04:34.030 20:11:03 -- setup/common.sh@57 -- # (( part++ )) 00:04:34.030 20:11:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.030 20:11:03 -- setup/common.sh@62 -- # wait 59527 00:04:34.030 20:11:03 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:34.030 20:11:03 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.030 20:11:03 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:34.030 20:11:03 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:34.030 20:11:04 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:34.030 20:11:04 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:34.030 20:11:04 -- setup/devices.sh@161 -- # break 00:04:34.030 20:11:04 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:34.030 20:11:04 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:34.030 20:11:04 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:34.030 20:11:04 -- setup/devices.sh@166 -- # dm=dm-0 00:04:34.030 20:11:04 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:34.030 20:11:04 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:34.030 20:11:04 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.030 20:11:04 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:34.030 20:11:04 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.030 20:11:04 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:34.030 20:11:04 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:34.030 20:11:04 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.030 20:11:04 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:34.030 20:11:04 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:34.030 20:11:04 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:34.030 20:11:04 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.030 20:11:04 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:34.030 20:11:04 -- setup/devices.sh@53 -- # local found=0 00:04:34.030 20:11:04 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:34.030 20:11:04 -- setup/devices.sh@56 -- # : 00:04:34.030 20:11:04 -- setup/devices.sh@59 -- # local pci status 00:04:34.030 20:11:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.030 20:11:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:34.030 20:11:04 -- setup/devices.sh@47 -- # setup output config 00:04:34.030 20:11:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.030 20:11:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:34.289 20:11:04 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.289 20:11:04 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:34.289 20:11:04 -- setup/devices.sh@63 -- # found=1 00:04:34.289 20:11:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.289 20:11:04 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.289 20:11:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.548 20:11:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.548 20:11:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.548 20:11:04 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.548 20:11:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.548 20:11:04 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:34.548 20:11:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.115 20:11:05 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.115 20:11:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.115 20:11:05 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.115 20:11:05 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:35.115 20:11:05 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.115 20:11:05 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:35.115 20:11:05 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:35.115 20:11:05 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.374 20:11:05 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:35.374 20:11:05 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:35.374 20:11:05 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:35.374 20:11:05 -- setup/devices.sh@50 -- # local mount_point= 00:04:35.374 20:11:05 -- setup/devices.sh@51 -- # local test_file= 00:04:35.374 20:11:05 -- setup/devices.sh@53 -- # local found=0 00:04:35.374 20:11:05 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:35.374 20:11:05 -- setup/devices.sh@59 -- # local pci status 00:04:35.374 20:11:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.374 20:11:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:35.374 20:11:05 -- setup/devices.sh@47 -- # setup output config 00:04:35.374 20:11:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.374 20:11:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:35.633 20:11:05 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.633 20:11:05 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:35.633 20:11:05 -- setup/devices.sh@63 -- # found=1 00:04:35.633 20:11:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.633 20:11:05 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.633 20:11:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.892 20:11:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.892 20:11:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.892 20:11:06 -- setup/devices.sh@62 -- # [[ 
0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.892 20:11:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.892 20:11:06 -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:35.892 20:11:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.151 20:11:06 -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:36.151 20:11:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.410 20:11:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.410 20:11:06 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:36.410 20:11:06 -- setup/devices.sh@68 -- # return 0 00:04:36.410 20:11:06 -- setup/devices.sh@187 -- # cleanup_dm 00:04:36.410 20:11:06 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.411 20:11:06 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:36.411 20:11:06 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:36.670 20:11:06 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:36.670 20:11:06 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:36.670 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:36.670 20:11:06 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:36.670 20:11:06 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:36.670 00:04:36.670 real 0m5.858s 00:04:36.670 user 0m1.059s 00:04:36.670 sys 0m1.696s 00:04:36.670 ************************************ 00:04:36.670 END TEST dm_mount 00:04:36.670 ************************************ 00:04:36.670 20:11:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.670 20:11:06 -- common/autotest_common.sh@10 -- # set +x 00:04:36.670 20:11:06 -- setup/devices.sh@1 -- # cleanup 00:04:36.670 20:11:06 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:36.670 20:11:06 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.670 20:11:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:36.670 20:11:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:36.670 20:11:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:36.670 20:11:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:36.930 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:36.930 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:36.930 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:36.930 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:36.930 20:11:07 -- setup/devices.sh@12 -- # cleanup_dm 00:04:36.930 20:11:07 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.930 20:11:07 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:36.930 20:11:07 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:36.930 20:11:07 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:36.930 20:11:07 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:36.930 20:11:07 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:36.930 00:04:36.930 real 0m14.874s 00:04:36.930 user 0m3.791s 00:04:36.930 sys 0m5.322s 00:04:36.930 ************************************ 00:04:36.930 END TEST devices 00:04:36.930 ************************************ 00:04:36.930 20:11:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.930 20:11:07 -- common/autotest_common.sh@10 -- # set 
+x 00:04:36.930 00:04:36.930 real 0m52.492s 00:04:36.930 user 0m12.610s 00:04:36.930 sys 0m19.682s 00:04:36.930 20:11:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.930 20:11:07 -- common/autotest_common.sh@10 -- # set +x 00:04:36.930 ************************************ 00:04:36.930 END TEST setup.sh 00:04:36.930 ************************************ 00:04:36.930 20:11:07 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:37.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:38.126 Hugepages 00:04:38.126 node hugesize free / total 00:04:38.126 node0 1048576kB 0 / 0 00:04:38.384 node0 2048kB 2048 / 2048 00:04:38.384 00:04:38.384 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:38.384 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:38.384 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:38.643 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:38.643 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:38.902 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:38.902 20:11:08 -- spdk/autotest.sh@130 -- # uname -s 00:04:38.902 20:11:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:38.902 20:11:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:38.902 20:11:08 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:39.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.035 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.293 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.293 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.293 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.293 20:11:10 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:41.736 20:11:11 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:41.736 20:11:11 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:41.736 20:11:11 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:41.736 20:11:11 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:41.736 20:11:11 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:41.736 20:11:11 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:41.736 20:11:11 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.736 20:11:11 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:41.736 20:11:11 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:41.736 20:11:11 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:04:41.736 20:11:11 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:41.736 20:11:11 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.255 Waiting for block devices as requested 00:04:42.255 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.514 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.514 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.514 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.792 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be 
missing 00:04:47.792 20:11:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:47.792 20:11:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:47.792 20:11:17 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:04:47.792 20:11:17 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:47.792 20:11:17 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:47.792 20:11:17 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:47.792 20:11:17 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:47.792 20:11:17 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:04:47.792 20:11:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:47.792 20:11:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:47.792 20:11:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:47.792 20:11:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:47.792 20:11:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:47.793 20:11:17 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:47.793 20:11:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:47.793 20:11:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:47.793 20:11:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:47.793 20:11:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:47.793 20:11:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:47.793 20:11:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:47.793 20:11:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:47.793 20:11:17 -- common/autotest_common.sh@1543 -- # continue 00:04:47.793 20:11:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:47.793 20:11:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:47.793 20:11:17 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:47.793 20:11:17 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:04:47.793 20:11:17 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:47.793 20:11:17 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:47.793 20:11:17 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:47.793 20:11:17 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:47.793 20:11:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:47.793 20:11:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:47.793 20:11:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:47.793 20:11:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:47.793 20:11:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:47.793 20:11:17 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:47.793 20:11:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:47.793 20:11:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:47.793 20:11:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:47.793 
20:11:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:47.793 20:11:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:47.793 20:11:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:47.793 20:11:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:47.793 20:11:17 -- common/autotest_common.sh@1543 -- # continue 00:04:47.793 20:11:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:47.793 20:11:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:47.793 20:11:17 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:47.793 20:11:17 -- common/autotest_common.sh@1488 -- # grep 0000:00:12.0/nvme/nvme 00:04:47.793 20:11:17 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:47.793 20:11:17 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:47.793 20:11:17 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:47.793 20:11:17 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme2 00:04:47.793 20:11:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:47.793 20:11:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:47.793 20:11:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:47.793 20:11:17 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:47.793 20:11:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:47.793 20:11:17 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:47.793 20:11:17 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:47.793 20:11:17 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:47.793 20:11:17 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:47.793 20:11:17 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:47.793 20:11:17 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:47.793 20:11:17 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:47.793 20:11:17 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:47.793 20:11:17 -- common/autotest_common.sh@1543 -- # continue 00:04:47.793 20:11:17 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:47.793 20:11:17 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:47.793 20:11:17 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:47.793 20:11:17 -- common/autotest_common.sh@1488 -- # grep 0000:00:13.0/nvme/nvme 00:04:47.793 20:11:17 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:47.793 20:11:17 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:47.793 20:11:17 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:47.793 20:11:17 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme3 00:04:47.793 20:11:17 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:47.793 20:11:17 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:47.793 20:11:17 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:47.793 20:11:17 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:47.793 20:11:17 -- common/autotest_common.sh@1531 -- # 
cut -d: -f2 00:04:47.793 20:11:18 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:47.793 20:11:18 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:47.793 20:11:18 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:47.793 20:11:18 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:47.793 20:11:18 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:47.793 20:11:18 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:47.793 20:11:18 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:47.793 20:11:18 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:47.793 20:11:18 -- common/autotest_common.sh@1543 -- # continue 00:04:47.793 20:11:18 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:47.793 20:11:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.793 20:11:18 -- common/autotest_common.sh@10 -- # set +x 00:04:48.051 20:11:18 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:48.051 20:11:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:48.051 20:11:18 -- common/autotest_common.sh@10 -- # set +x 00:04:48.051 20:11:18 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:48.618 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.552 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.552 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.552 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.552 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.552 20:11:19 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:49.552 20:11:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:49.552 20:11:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.552 20:11:19 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:49.552 20:11:19 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:49.552 20:11:19 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:49.552 20:11:19 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:49.552 20:11:19 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:49.552 20:11:19 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:04:49.552 20:11:19 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:49.552 20:11:19 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:49.552 20:11:19 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.552 20:11:19 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:49.552 20:11:19 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:49.816 20:11:19 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:04:49.816 20:11:19 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:49.816 20:11:19 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:49.816 20:11:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:49.816 20:11:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:49.816 20:11:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:49.816 20:11:19 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:49.816 20:11:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:49.816 20:11:19 -- common/autotest_common.sh@1566 
-- # device=0x0010 00:04:49.816 20:11:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:49.816 20:11:19 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:49.816 20:11:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:49.816 20:11:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:49.816 20:11:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:49.816 20:11:19 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:49.816 20:11:19 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:49.816 20:11:19 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:49.816 20:11:19 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:49.816 20:11:19 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:04:49.816 20:11:19 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:04:49.816 20:11:19 -- common/autotest_common.sh@1579 -- # return 0 00:04:49.816 20:11:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:49.816 20:11:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:49.816 20:11:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:49.816 20:11:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:49.816 20:11:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:49.816 20:11:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:49.816 20:11:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.816 20:11:19 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:49.816 20:11:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.816 20:11:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.816 20:11:19 -- common/autotest_common.sh@10 -- # set +x 00:04:49.816 ************************************ 00:04:49.816 START TEST env 00:04:49.816 ************************************ 00:04:49.816 20:11:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:50.084 * Looking for test storage... 
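
(The opal_revert_cleanup pass traced above walks every NVMe BDF and only reverts controllers whose PCI device ID is 0x0a54; the QEMU controllers here all report 0x0010, so the printf at @1572 emits nothing and the function returns 0. A simplified sketch of that walk, reconstructed from the traced helpers: gen_nvme.sh emits an SPDK JSON config and jq extracts the transport addresses.)

    # Enumerate NVMe BDFs the way get_nvme_bdfs does, then keep only the
    # devices whose PCI device ID matches the OPAL-revert target.
    rootdir=/home/vagrant/spdk_repo/spdk      # as bound in the trace
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")
        [[ $device == 0x0a54 ]] && printf '%s\n' "$bdf"
    done
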
00:04:50.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:50.084 20:11:20 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:50.084 20:11:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.085 20:11:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.085 20:11:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.085 ************************************ 00:04:50.085 START TEST env_memory 00:04:50.085 ************************************ 00:04:50.085 20:11:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:50.085 00:04:50.085 00:04:50.085 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.085 http://cunit.sourceforge.net/ 00:04:50.085 00:04:50.085 00:04:50.085 Suite: memory 00:04:50.085 Test: alloc and free memory map ...[2024-04-24 20:11:20.274495] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:50.085 passed 00:04:50.343 Test: mem map translation ...[2024-04-24 20:11:20.320364] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:50.343 [2024-04-24 20:11:20.320439] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:50.343 [2024-04-24 20:11:20.320514] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:50.343 [2024-04-24 20:11:20.320541] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:50.343 passed 00:04:50.343 Test: mem map registration ...[2024-04-24 20:11:20.391590] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:50.343 [2024-04-24 20:11:20.391688] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:50.343 passed 00:04:50.343 Test: mem map adjacent registrations ...passed 00:04:50.343 00:04:50.343 Run Summary: Type Total Ran Passed Failed Inactive 00:04:50.343 suites 1 1 n/a 0 0 00:04:50.343 tests 4 4 4 0 0 00:04:50.343 asserts 152 152 152 0 n/a 00:04:50.343 00:04:50.343 Elapsed time = 0.252 seconds 00:04:50.343 00:04:50.343 real 0m0.298s 00:04:50.343 user 0m0.263s 00:04:50.343 sys 0m0.031s 00:04:50.343 20:11:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:50.343 20:11:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.343 ************************************ 00:04:50.343 END TEST env_memory 00:04:50.343 ************************************ 00:04:50.343 20:11:20 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:50.343 20:11:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:50.343 20:11:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:50.343 20:11:20 -- common/autotest_common.sh@10 -- # set +x 00:04:50.601 ************************************ 00:04:50.601 START TEST env_vtophys 00:04:50.601 ************************************ 00:04:50.601 20:11:20 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:50.601 EAL: lib.eal log level changed from notice to debug 00:04:50.601 EAL: Detected lcore 0 as core 0 on socket 0 00:04:50.601 EAL: Detected lcore 1 as core 0 on socket 0 00:04:50.601 EAL: Detected lcore 2 as core 0 on socket 0 00:04:50.601 EAL: Detected lcore 3 as core 0 on socket 0 00:04:50.601 EAL: Detected lcore 4 as core 0 on socket 0 00:04:50.601 EAL: Detected lcore 5 as core 0 on socket 0 00:04:50.601 EAL: Detected lcore 6 as core 0 on socket 0 00:04:50.601 EAL: Detected lcore 7 as core 0 on socket 0 00:04:50.601 EAL: Detected lcore 8 as core 0 on socket 0 00:04:50.601 EAL: Detected lcore 9 as core 0 on socket 0 00:04:50.601 EAL: Maximum logical cores by configuration: 128 00:04:50.601 EAL: Detected CPU lcores: 10 00:04:50.601 EAL: Detected NUMA nodes: 1 00:04:50.601 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:50.601 EAL: Detected shared linkage of DPDK 00:04:50.601 EAL: No shared files mode enabled, IPC will be disabled 00:04:50.601 EAL: Selected IOVA mode 'PA' 00:04:50.601 EAL: Probing VFIO support... 00:04:50.601 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:50.601 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:50.601 EAL: Ask a virtual area of 0x2e000 bytes 00:04:50.601 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:50.601 EAL: Setting up physically contiguous memory... 00:04:50.601 EAL: Setting maximum number of open files to 524288 00:04:50.601 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:50.601 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:50.601 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.601 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:50.601 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.601 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.601 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:50.601 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:50.601 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.601 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:50.602 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.602 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.602 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:50.602 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:50.602 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.602 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:50.602 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.602 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.602 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:50.602 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:50.602 EAL: Ask a virtual area of 0x61000 bytes 00:04:50.602 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:50.602 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:50.602 EAL: Ask a virtual area of 0x400000000 bytes 00:04:50.602 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:50.602 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:50.602 EAL: Hugepages will be freed exactly as allocated. 
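The EAL probe above finds no vfio module, falls back to IOVA mode 'PA' with 2 MiB hugepages, and reserves the four large virtual memseg windows up front. A minimal sketch of preparing a box to match that environment, assuming a stock SPDK checkout (HUGEMEM is in megabytes):

  # Reserve 2 MiB hugepages before starting any SPDK/DPDK process.
  sudo HUGEMEM=4096 scripts/setup.sh

  # With no vfio/vfio-pci modules loaded, EAL skips VFIO exactly as logged above.
  lsmod | grep -E 'vfio|uio' || echo 'no VFIO/UIO modules loaded'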
00:04:50.602 EAL: No shared files mode enabled, IPC is disabled 00:04:50.602 EAL: No shared files mode enabled, IPC is disabled 00:04:50.861 EAL: TSC frequency is ~2490000 KHz 00:04:50.861 EAL: Main lcore 0 is ready (tid=7fe01b615a40;cpuset=[0]) 00:04:50.861 EAL: Trying to obtain current memory policy. 00:04:50.861 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.861 EAL: Restoring previous memory policy: 0 00:04:50.861 EAL: request: mp_malloc_sync 00:04:50.861 EAL: No shared files mode enabled, IPC is disabled 00:04:50.861 EAL: Heap on socket 0 was expanded by 2MB 00:04:50.861 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:50.861 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:50.861 EAL: Mem event callback 'spdk:(nil)' registered 00:04:50.861 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:50.861 00:04:50.861 00:04:50.861 CUnit - A unit testing framework for C - Version 2.1-3 00:04:50.861 http://cunit.sourceforge.net/ 00:04:50.861 00:04:50.861 00:04:50.861 Suite: components_suite 00:04:51.428 Test: vtophys_malloc_test ...passed 00:04:51.428 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:51.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.428 EAL: Restoring previous memory policy: 4 00:04:51.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.428 EAL: request: mp_malloc_sync 00:04:51.428 EAL: No shared files mode enabled, IPC is disabled 00:04:51.428 EAL: Heap on socket 0 was expanded by 4MB 00:04:51.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.428 EAL: request: mp_malloc_sync 00:04:51.428 EAL: No shared files mode enabled, IPC is disabled 00:04:51.428 EAL: Heap on socket 0 was shrunk by 4MB 00:04:51.428 EAL: Trying to obtain current memory policy. 00:04:51.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.428 EAL: Restoring previous memory policy: 4 00:04:51.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.428 EAL: request: mp_malloc_sync 00:04:51.428 EAL: No shared files mode enabled, IPC is disabled 00:04:51.428 EAL: Heap on socket 0 was expanded by 6MB 00:04:51.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.428 EAL: request: mp_malloc_sync 00:04:51.428 EAL: No shared files mode enabled, IPC is disabled 00:04:51.428 EAL: Heap on socket 0 was shrunk by 6MB 00:04:51.428 EAL: Trying to obtain current memory policy. 00:04:51.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.428 EAL: Restoring previous memory policy: 4 00:04:51.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.428 EAL: request: mp_malloc_sync 00:04:51.428 EAL: No shared files mode enabled, IPC is disabled 00:04:51.428 EAL: Heap on socket 0 was expanded by 10MB 00:04:51.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.428 EAL: request: mp_malloc_sync 00:04:51.428 EAL: No shared files mode enabled, IPC is disabled 00:04:51.428 EAL: Heap on socket 0 was shrunk by 10MB 00:04:51.428 EAL: Trying to obtain current memory policy. 
00:04:51.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.428 EAL: Restoring previous memory policy: 4 00:04:51.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.428 EAL: request: mp_malloc_sync 00:04:51.428 EAL: No shared files mode enabled, IPC is disabled 00:04:51.428 EAL: Heap on socket 0 was expanded by 18MB 00:04:51.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.428 EAL: request: mp_malloc_sync 00:04:51.428 EAL: No shared files mode enabled, IPC is disabled 00:04:51.428 EAL: Heap on socket 0 was shrunk by 18MB 00:04:51.428 EAL: Trying to obtain current memory policy. 00:04:51.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.428 EAL: Restoring previous memory policy: 4 00:04:51.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.428 EAL: request: mp_malloc_sync 00:04:51.428 EAL: No shared files mode enabled, IPC is disabled 00:04:51.428 EAL: Heap on socket 0 was expanded by 34MB 00:04:51.428 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.428 EAL: request: mp_malloc_sync 00:04:51.428 EAL: No shared files mode enabled, IPC is disabled 00:04:51.428 EAL: Heap on socket 0 was shrunk by 34MB 00:04:51.688 EAL: Trying to obtain current memory policy. 00:04:51.688 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.688 EAL: Restoring previous memory policy: 4 00:04:51.688 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.688 EAL: request: mp_malloc_sync 00:04:51.688 EAL: No shared files mode enabled, IPC is disabled 00:04:51.688 EAL: Heap on socket 0 was expanded by 66MB 00:04:51.688 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.688 EAL: request: mp_malloc_sync 00:04:51.688 EAL: No shared files mode enabled, IPC is disabled 00:04:51.688 EAL: Heap on socket 0 was shrunk by 66MB 00:04:51.948 EAL: Trying to obtain current memory policy. 00:04:51.948 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:51.948 EAL: Restoring previous memory policy: 4 00:04:51.948 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.948 EAL: request: mp_malloc_sync 00:04:51.948 EAL: No shared files mode enabled, IPC is disabled 00:04:51.948 EAL: Heap on socket 0 was expanded by 130MB 00:04:52.207 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.207 EAL: request: mp_malloc_sync 00:04:52.207 EAL: No shared files mode enabled, IPC is disabled 00:04:52.207 EAL: Heap on socket 0 was shrunk by 130MB 00:04:52.467 EAL: Trying to obtain current memory policy. 00:04:52.467 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.467 EAL: Restoring previous memory policy: 4 00:04:52.467 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.467 EAL: request: mp_malloc_sync 00:04:52.467 EAL: No shared files mode enabled, IPC is disabled 00:04:52.467 EAL: Heap on socket 0 was expanded by 258MB 00:04:53.036 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.036 EAL: request: mp_malloc_sync 00:04:53.036 EAL: No shared files mode enabled, IPC is disabled 00:04:53.036 EAL: Heap on socket 0 was shrunk by 258MB 00:04:53.603 EAL: Trying to obtain current memory policy. 
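The rounds above, and the ones that continue below, are vtophys_malloc_test walking an allocation ladder: each malloc roughly doubles from 4 MB up to 1026 MB, and every "expanded by" line is matched by a "shrunk by" line when the buffer is freed, confirming the 'spdk:(nil)' mem event callback fires on both the grow and release paths. To repeat just this suite outside the harness (binary path taken from the trace, hugepages already reserved, run from the repo root):

  sudo ./test/env/vtophys/vtophys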
00:04:53.603 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:53.603 EAL: Restoring previous memory policy: 4 00:04:53.604 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.604 EAL: request: mp_malloc_sync 00:04:53.604 EAL: No shared files mode enabled, IPC is disabled 00:04:53.604 EAL: Heap on socket 0 was expanded by 514MB 00:04:54.997 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.997 EAL: request: mp_malloc_sync 00:04:54.997 EAL: No shared files mode enabled, IPC is disabled 00:04:54.997 EAL: Heap on socket 0 was shrunk by 514MB 00:04:55.579 EAL: Trying to obtain current memory policy. 00:04:55.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:55.839 EAL: Restoring previous memory policy: 4 00:04:55.839 EAL: Calling mem event callback 'spdk:(nil)' 00:04:55.839 EAL: request: mp_malloc_sync 00:04:55.839 EAL: No shared files mode enabled, IPC is disabled 00:04:55.839 EAL: Heap on socket 0 was expanded by 1026MB 00:04:58.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:58.373 EAL: request: mp_malloc_sync 00:04:58.373 EAL: No shared files mode enabled, IPC is disabled 00:04:58.373 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:00.279 passed 00:05:00.279 00:05:00.279 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.279 suites 1 1 n/a 0 0 00:05:00.279 tests 2 2 2 0 0 00:05:00.279 asserts 5362 5362 5362 0 n/a 00:05:00.279 00:05:00.279 Elapsed time = 9.116 seconds 00:05:00.279 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.279 EAL: request: mp_malloc_sync 00:05:00.279 EAL: No shared files mode enabled, IPC is disabled 00:05:00.279 EAL: Heap on socket 0 was shrunk by 2MB 00:05:00.279 EAL: No shared files mode enabled, IPC is disabled 00:05:00.279 EAL: No shared files mode enabled, IPC is disabled 00:05:00.279 EAL: No shared files mode enabled, IPC is disabled 00:05:00.280 00:05:00.280 real 0m9.427s 00:05:00.280 user 0m8.367s 00:05:00.280 sys 0m0.900s 00:05:00.280 20:11:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.280 20:11:30 -- common/autotest_common.sh@10 -- # set +x 00:05:00.280 ************************************ 00:05:00.280 END TEST env_vtophys 00:05:00.280 ************************************ 00:05:00.280 20:11:30 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.280 20:11:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.280 20:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.280 20:11:30 -- common/autotest_common.sh@10 -- # set +x 00:05:00.280 ************************************ 00:05:00.280 START TEST env_pci 00:05:00.280 ************************************ 00:05:00.280 20:11:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.280 00:05:00.280 00:05:00.280 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.280 http://cunit.sourceforge.net/ 00:05:00.280 00:05:00.280 00:05:00.280 Suite: pci 00:05:00.280 Test: pci_hook ...[2024-04-24 20:11:30.281677] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 61426 has claimed it 00:05:00.280 EAL: Cannot find device (10000:00:01.0) 00:05:00.280 EAL: Failed to attach device on primary process 00:05:00.280 passed 00:05:00.280 00:05:00.280 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.280 suites 1 1 n/a 0 0 00:05:00.280 tests 1 1 1 0 0 00:05:00.280 asserts 25 25 25 0 n/a 00:05:00.280 00:05:00.280 Elapsed 
time = 0.012 seconds 00:05:00.280 00:05:00.280 real 0m0.117s 00:05:00.280 user 0m0.041s 00:05:00.280 sys 0m0.075s 00:05:00.280 20:11:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.280 20:11:30 -- common/autotest_common.sh@10 -- # set +x 00:05:00.280 ************************************ 00:05:00.280 END TEST env_pci 00:05:00.280 ************************************ 00:05:00.280 20:11:30 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:00.280 20:11:30 -- env/env.sh@15 -- # uname 00:05:00.280 20:11:30 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:00.280 20:11:30 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:00.280 20:11:30 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.280 20:11:30 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:00.280 20:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.280 20:11:30 -- common/autotest_common.sh@10 -- # set +x 00:05:00.544 ************************************ 00:05:00.544 START TEST env_dpdk_post_init 00:05:00.544 ************************************ 00:05:00.544 20:11:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.544 EAL: Detected CPU lcores: 10 00:05:00.544 EAL: Detected NUMA nodes: 1 00:05:00.544 EAL: Detected shared linkage of DPDK 00:05:00.544 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.544 EAL: Selected IOVA mode 'PA' 00:05:00.544 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:00.544 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:00.544 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:00.544 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:00.544 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:00.811 Starting DPDK initialization... 00:05:00.812 Starting SPDK post initialization... 00:05:00.812 SPDK NVMe probe 00:05:00.812 Attaching to 0000:00:10.0 00:05:00.812 Attaching to 0000:00:11.0 00:05:00.812 Attaching to 0000:00:12.0 00:05:00.812 Attaching to 0000:00:13.0 00:05:00.812 Attached to 0000:00:10.0 00:05:00.812 Attached to 0000:00:11.0 00:05:00.812 Attached to 0000:00:13.0 00:05:00.812 Attached to 0000:00:12.0 00:05:00.812 Cleaning up... 
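All four controllers the post-init probe attaches are QEMU's emulated NVMe device (vendor:device 1b36:0010, matching the 0x0010 device-id reads earlier in the log). A quick cross-check of what is present and how it is bound, assuming the standard SPDK helper scripts:

  # List the emulated NVMe functions visible on the PCI bus.
  lspci -nn | grep '1b36:0010'

  # Show which kernel or userspace driver each NVMe BDF is currently bound to.
  sudo scripts/setup.sh status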
00:05:00.812 00:05:00.812 real 0m0.303s 00:05:00.812 user 0m0.095s 00:05:00.812 sys 0m0.111s 00:05:00.812 20:11:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:00.812 ************************************ 00:05:00.812 END TEST env_dpdk_post_init 00:05:00.812 ************************************ 00:05:00.812 20:11:30 -- common/autotest_common.sh@10 -- # set +x 00:05:00.812 20:11:30 -- env/env.sh@26 -- # uname 00:05:00.812 20:11:30 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:00.812 20:11:30 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.812 20:11:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.812 20:11:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.812 20:11:30 -- common/autotest_common.sh@10 -- # set +x 00:05:00.812 ************************************ 00:05:00.812 START TEST env_mem_callbacks 00:05:00.812 ************************************ 00:05:00.812 20:11:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:00.812 EAL: Detected CPU lcores: 10 00:05:00.812 EAL: Detected NUMA nodes: 1 00:05:00.812 EAL: Detected shared linkage of DPDK 00:05:01.072 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.072 EAL: Selected IOVA mode 'PA' 00:05:01.072 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.072 00:05:01.072 00:05:01.072 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.072 http://cunit.sourceforge.net/ 00:05:01.072 00:05:01.072 00:05:01.072 Suite: memory 00:05:01.072 Test: test ... 00:05:01.072 register 0x200000200000 2097152 00:05:01.072 malloc 3145728 00:05:01.072 register 0x200000400000 4194304 00:05:01.072 buf 0x2000004fffc0 len 3145728 PASSED 00:05:01.072 malloc 64 00:05:01.072 buf 0x2000004ffec0 len 64 PASSED 00:05:01.072 malloc 4194304 00:05:01.072 register 0x200000800000 6291456 00:05:01.072 buf 0x2000009fffc0 len 4194304 PASSED 00:05:01.072 free 0x2000004fffc0 3145728 00:05:01.072 free 0x2000004ffec0 64 00:05:01.072 unregister 0x200000400000 4194304 PASSED 00:05:01.072 free 0x2000009fffc0 4194304 00:05:01.072 unregister 0x200000800000 6291456 PASSED 00:05:01.072 malloc 8388608 00:05:01.072 register 0x200000400000 10485760 00:05:01.072 buf 0x2000005fffc0 len 8388608 PASSED 00:05:01.072 free 0x2000005fffc0 8388608 00:05:01.072 unregister 0x200000400000 10485760 PASSED 00:05:01.072 passed 00:05:01.072 00:05:01.072 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.072 suites 1 1 n/a 0 0 00:05:01.072 tests 1 1 1 0 0 00:05:01.072 asserts 15 15 15 0 n/a 00:05:01.072 00:05:01.072 Elapsed time = 0.085 seconds 00:05:01.072 00:05:01.072 real 0m0.292s 00:05:01.072 user 0m0.115s 00:05:01.072 sys 0m0.076s 00:05:01.072 20:11:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.072 ************************************ 00:05:01.072 END TEST env_mem_callbacks 00:05:01.072 ************************************ 00:05:01.072 20:11:31 -- common/autotest_common.sh@10 -- # set +x 00:05:01.330 00:05:01.330 real 0m11.350s 00:05:01.330 user 0m9.183s 00:05:01.330 sys 0m1.723s 00:05:01.330 20:11:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:01.330 ************************************ 00:05:01.330 END TEST env 00:05:01.330 ************************************ 00:05:01.330 20:11:31 -- common/autotest_common.sh@10 -- # set +x 00:05:01.330 20:11:31 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
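Every START TEST/END TEST banner pair in this log, including the run_test rpc call just above, comes from the run_test helper in autotest_common.sh, which times a command and brackets its output. A simplified sketch of the idea, not the actual helper (which also records timing data for the final report):

  run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"                      # run the test command itself
      echo "END TEST $name"
  }

  run_test_sketch rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh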
00:05:01.330 20:11:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:01.330 20:11:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:01.330 20:11:31 -- common/autotest_common.sh@10 -- # set +x 00:05:01.330 ************************************ 00:05:01.330 START TEST rpc 00:05:01.330 ************************************ 00:05:01.330 20:11:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.590 * Looking for test storage... 00:05:01.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.590 20:11:31 -- rpc/rpc.sh@65 -- # spdk_pid=61563 00:05:01.590 20:11:31 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:01.590 20:11:31 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.590 20:11:31 -- rpc/rpc.sh@67 -- # waitforlisten 61563 00:05:01.590 20:11:31 -- common/autotest_common.sh@817 -- # '[' -z 61563 ']' 00:05:01.590 20:11:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.590 20:11:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:01.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.590 20:11:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.590 20:11:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:01.590 20:11:31 -- common/autotest_common.sh@10 -- # set +x 00:05:01.590 [2024-04-24 20:11:31.747773] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:05:01.590 [2024-04-24 20:11:31.747963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61563 ] 00:05:01.849 [2024-04-24 20:11:31.934940] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.108 [2024-04-24 20:11:32.175851] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:02.108 [2024-04-24 20:11:32.175922] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 61563' to capture a snapshot of events at runtime. 00:05:02.108 [2024-04-24 20:11:32.175936] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:02.108 [2024-04-24 20:11:32.175950] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:02.108 [2024-04-24 20:11:32.175960] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid61563 for offline analysis/debug. 
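The target was started with -e bdev, so the bdev tracepoint group is live, and the startup notice above spells out how to inspect it. Following that hint (PID 61563 is this run's target):

  # Snapshot tracepoint events while the target is still running.
  build/bin/spdk_trace -s spdk_tgt -p 61563

  # Or read the shared-memory trace file back after the target exits.
  build/bin/spdk_trace -f /dev/shm/spdk_tgt_trace.pid61563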
00:05:02.109 [2024-04-24 20:11:32.176008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.045 20:11:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:03.045 20:11:33 -- common/autotest_common.sh@850 -- # return 0 00:05:03.045 20:11:33 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:03.045 20:11:33 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:03.045 20:11:33 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:03.045 20:11:33 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:03.045 20:11:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.045 20:11:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.046 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.046 ************************************ 00:05:03.046 START TEST rpc_integrity 00:05:03.046 ************************************ 00:05:03.046 20:11:33 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:03.046 20:11:33 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.046 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.046 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.305 20:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.305 20:11:33 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.305 20:11:33 -- rpc/rpc.sh@13 -- # jq length 00:05:03.305 20:11:33 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.305 20:11:33 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.305 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.305 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.305 20:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.305 20:11:33 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:03.305 20:11:33 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.305 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.305 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.305 20:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.305 20:11:33 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.305 { 00:05:03.305 "name": "Malloc0", 00:05:03.305 "aliases": [ 00:05:03.305 "2d902b23-bbb3-48ee-85f8-0bdaa24a75e4" 00:05:03.305 ], 00:05:03.305 "product_name": "Malloc disk", 00:05:03.305 "block_size": 512, 00:05:03.305 "num_blocks": 16384, 00:05:03.305 "uuid": "2d902b23-bbb3-48ee-85f8-0bdaa24a75e4", 00:05:03.305 "assigned_rate_limits": { 00:05:03.305 "rw_ios_per_sec": 0, 00:05:03.305 "rw_mbytes_per_sec": 0, 00:05:03.305 "r_mbytes_per_sec": 0, 00:05:03.305 "w_mbytes_per_sec": 0 00:05:03.305 }, 00:05:03.305 "claimed": false, 00:05:03.305 "zoned": false, 00:05:03.305 "supported_io_types": { 00:05:03.305 "read": true, 00:05:03.305 "write": true, 00:05:03.305 "unmap": true, 00:05:03.305 "write_zeroes": true, 00:05:03.305 "flush": true, 00:05:03.305 "reset": true, 00:05:03.305 "compare": false, 00:05:03.305 "compare_and_write": false, 00:05:03.305 "abort": true, 00:05:03.305 "nvme_admin": false, 00:05:03.305 "nvme_io": false 00:05:03.305 }, 00:05:03.305 "memory_domains": [ 00:05:03.305 { 00:05:03.305 "dma_device_id": "system", 00:05:03.305 "dma_device_type": 1 
00:05:03.305 }, 00:05:03.305 { 00:05:03.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.305 "dma_device_type": 2 00:05:03.305 } 00:05:03.305 ], 00:05:03.305 "driver_specific": {} 00:05:03.305 } 00:05:03.305 ]' 00:05:03.305 20:11:33 -- rpc/rpc.sh@17 -- # jq length 00:05:03.305 20:11:33 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.305 20:11:33 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:03.305 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.305 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.305 [2024-04-24 20:11:33.424061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:03.305 [2024-04-24 20:11:33.424129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.305 [2024-04-24 20:11:33.424155] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:05:03.305 [2024-04-24 20:11:33.424170] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.305 [2024-04-24 20:11:33.426614] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.305 [2024-04-24 20:11:33.426662] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.305 Passthru0 00:05:03.305 20:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.305 20:11:33 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.305 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.305 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.305 20:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.305 20:11:33 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.305 { 00:05:03.305 "name": "Malloc0", 00:05:03.305 "aliases": [ 00:05:03.305 "2d902b23-bbb3-48ee-85f8-0bdaa24a75e4" 00:05:03.305 ], 00:05:03.305 "product_name": "Malloc disk", 00:05:03.305 "block_size": 512, 00:05:03.305 "num_blocks": 16384, 00:05:03.305 "uuid": "2d902b23-bbb3-48ee-85f8-0bdaa24a75e4", 00:05:03.305 "assigned_rate_limits": { 00:05:03.305 "rw_ios_per_sec": 0, 00:05:03.305 "rw_mbytes_per_sec": 0, 00:05:03.305 "r_mbytes_per_sec": 0, 00:05:03.305 "w_mbytes_per_sec": 0 00:05:03.305 }, 00:05:03.305 "claimed": true, 00:05:03.305 "claim_type": "exclusive_write", 00:05:03.305 "zoned": false, 00:05:03.305 "supported_io_types": { 00:05:03.305 "read": true, 00:05:03.305 "write": true, 00:05:03.305 "unmap": true, 00:05:03.305 "write_zeroes": true, 00:05:03.305 "flush": true, 00:05:03.305 "reset": true, 00:05:03.305 "compare": false, 00:05:03.305 "compare_and_write": false, 00:05:03.305 "abort": true, 00:05:03.305 "nvme_admin": false, 00:05:03.305 "nvme_io": false 00:05:03.305 }, 00:05:03.305 "memory_domains": [ 00:05:03.305 { 00:05:03.305 "dma_device_id": "system", 00:05:03.305 "dma_device_type": 1 00:05:03.305 }, 00:05:03.306 { 00:05:03.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.306 "dma_device_type": 2 00:05:03.306 } 00:05:03.306 ], 00:05:03.306 "driver_specific": {} 00:05:03.306 }, 00:05:03.306 { 00:05:03.306 "name": "Passthru0", 00:05:03.306 "aliases": [ 00:05:03.306 "90c3d168-55c3-5c07-96a8-3e4526c6bdd2" 00:05:03.306 ], 00:05:03.306 "product_name": "passthru", 00:05:03.306 "block_size": 512, 00:05:03.306 "num_blocks": 16384, 00:05:03.306 "uuid": "90c3d168-55c3-5c07-96a8-3e4526c6bdd2", 00:05:03.306 "assigned_rate_limits": { 00:05:03.306 "rw_ios_per_sec": 0, 00:05:03.306 "rw_mbytes_per_sec": 0, 00:05:03.306 "r_mbytes_per_sec": 0, 00:05:03.306 "w_mbytes_per_sec": 0 
00:05:03.306 }, 00:05:03.306 "claimed": false, 00:05:03.306 "zoned": false, 00:05:03.306 "supported_io_types": { 00:05:03.306 "read": true, 00:05:03.306 "write": true, 00:05:03.306 "unmap": true, 00:05:03.306 "write_zeroes": true, 00:05:03.306 "flush": true, 00:05:03.306 "reset": true, 00:05:03.306 "compare": false, 00:05:03.306 "compare_and_write": false, 00:05:03.306 "abort": true, 00:05:03.306 "nvme_admin": false, 00:05:03.306 "nvme_io": false 00:05:03.306 }, 00:05:03.306 "memory_domains": [ 00:05:03.306 { 00:05:03.306 "dma_device_id": "system", 00:05:03.306 "dma_device_type": 1 00:05:03.306 }, 00:05:03.306 { 00:05:03.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.306 "dma_device_type": 2 00:05:03.306 } 00:05:03.306 ], 00:05:03.306 "driver_specific": { 00:05:03.306 "passthru": { 00:05:03.306 "name": "Passthru0", 00:05:03.306 "base_bdev_name": "Malloc0" 00:05:03.306 } 00:05:03.306 } 00:05:03.306 } 00:05:03.306 ]' 00:05:03.306 20:11:33 -- rpc/rpc.sh@21 -- # jq length 00:05:03.306 20:11:33 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.306 20:11:33 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.306 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.306 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.306 20:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.306 20:11:33 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:03.306 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.306 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.564 20:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.564 20:11:33 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.564 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.564 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.564 20:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.564 20:11:33 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.564 20:11:33 -- rpc/rpc.sh@26 -- # jq length 00:05:03.564 ************************************ 00:05:03.564 END TEST rpc_integrity 00:05:03.564 ************************************ 00:05:03.564 20:11:33 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.564 00:05:03.564 real 0m0.347s 00:05:03.564 user 0m0.194s 00:05:03.564 sys 0m0.055s 00:05:03.564 20:11:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:03.564 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.564 20:11:33 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:03.564 20:11:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.564 20:11:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.564 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.564 ************************************ 00:05:03.564 START TEST rpc_plugins 00:05:03.564 ************************************ 00:05:03.564 20:11:33 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:05:03.564 20:11:33 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:03.564 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.564 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.564 20:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.564 20:11:33 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:03.564 20:11:33 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:03.564 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.564 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.822 20:11:33 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.822 20:11:33 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:03.822 { 00:05:03.822 "name": "Malloc1", 00:05:03.822 "aliases": [ 00:05:03.822 "d87cdb00-34ba-4bfb-b744-6cf6b8f87404" 00:05:03.822 ], 00:05:03.822 "product_name": "Malloc disk", 00:05:03.822 "block_size": 4096, 00:05:03.822 "num_blocks": 256, 00:05:03.822 "uuid": "d87cdb00-34ba-4bfb-b744-6cf6b8f87404", 00:05:03.822 "assigned_rate_limits": { 00:05:03.822 "rw_ios_per_sec": 0, 00:05:03.822 "rw_mbytes_per_sec": 0, 00:05:03.822 "r_mbytes_per_sec": 0, 00:05:03.822 "w_mbytes_per_sec": 0 00:05:03.822 }, 00:05:03.822 "claimed": false, 00:05:03.822 "zoned": false, 00:05:03.822 "supported_io_types": { 00:05:03.822 "read": true, 00:05:03.822 "write": true, 00:05:03.822 "unmap": true, 00:05:03.822 "write_zeroes": true, 00:05:03.822 "flush": true, 00:05:03.822 "reset": true, 00:05:03.822 "compare": false, 00:05:03.822 "compare_and_write": false, 00:05:03.822 "abort": true, 00:05:03.822 "nvme_admin": false, 00:05:03.822 "nvme_io": false 00:05:03.822 }, 00:05:03.822 "memory_domains": [ 00:05:03.822 { 00:05:03.822 "dma_device_id": "system", 00:05:03.822 "dma_device_type": 1 00:05:03.822 }, 00:05:03.822 { 00:05:03.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.822 "dma_device_type": 2 00:05:03.822 } 00:05:03.822 ], 00:05:03.822 "driver_specific": {} 00:05:03.822 } 00:05:03.822 ]' 00:05:03.823 20:11:33 -- rpc/rpc.sh@32 -- # jq length 00:05:03.823 20:11:33 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:03.823 20:11:33 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:03.823 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.823 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.823 20:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.823 20:11:33 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:03.823 20:11:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:03.823 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.823 20:11:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:03.823 20:11:33 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:03.823 20:11:33 -- rpc/rpc.sh@36 -- # jq length 00:05:03.823 ************************************ 00:05:03.823 END TEST rpc_plugins 00:05:03.823 ************************************ 00:05:03.823 20:11:33 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:03.823 00:05:03.823 real 0m0.162s 00:05:03.823 user 0m0.091s 00:05:03.823 sys 0m0.028s 00:05:03.823 20:11:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:03.823 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:03.823 20:11:33 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:03.823 20:11:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.823 20:11:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.823 20:11:33 -- common/autotest_common.sh@10 -- # set +x 00:05:04.082 ************************************ 00:05:04.082 START TEST rpc_trace_cmd_test 00:05:04.082 ************************************ 00:05:04.082 20:11:34 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:05:04.082 20:11:34 -- rpc/rpc.sh@40 -- # local info 00:05:04.082 20:11:34 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:04.082 20:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.082 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.082 20:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.082 20:11:34 -- rpc/rpc.sh@42 -- # 
info='{ 00:05:04.082 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid61563", 00:05:04.082 "tpoint_group_mask": "0x8", 00:05:04.082 "iscsi_conn": { 00:05:04.082 "mask": "0x2", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 }, 00:05:04.082 "scsi": { 00:05:04.082 "mask": "0x4", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 }, 00:05:04.082 "bdev": { 00:05:04.082 "mask": "0x8", 00:05:04.082 "tpoint_mask": "0xffffffffffffffff" 00:05:04.082 }, 00:05:04.082 "nvmf_rdma": { 00:05:04.082 "mask": "0x10", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 }, 00:05:04.082 "nvmf_tcp": { 00:05:04.082 "mask": "0x20", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 }, 00:05:04.082 "ftl": { 00:05:04.082 "mask": "0x40", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 }, 00:05:04.082 "blobfs": { 00:05:04.082 "mask": "0x80", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 }, 00:05:04.082 "dsa": { 00:05:04.082 "mask": "0x200", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 }, 00:05:04.082 "thread": { 00:05:04.082 "mask": "0x400", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 }, 00:05:04.082 "nvme_pcie": { 00:05:04.082 "mask": "0x800", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 }, 00:05:04.082 "iaa": { 00:05:04.082 "mask": "0x1000", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 }, 00:05:04.082 "nvme_tcp": { 00:05:04.082 "mask": "0x2000", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 }, 00:05:04.082 "bdev_nvme": { 00:05:04.082 "mask": "0x4000", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 }, 00:05:04.082 "sock": { 00:05:04.082 "mask": "0x8000", 00:05:04.082 "tpoint_mask": "0x0" 00:05:04.082 } 00:05:04.082 }' 00:05:04.082 20:11:34 -- rpc/rpc.sh@43 -- # jq length 00:05:04.082 20:11:34 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:04.082 20:11:34 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:04.082 20:11:34 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:04.082 20:11:34 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:04.082 20:11:34 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:04.082 20:11:34 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:04.082 20:11:34 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:04.082 20:11:34 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:04.082 ************************************ 00:05:04.082 END TEST rpc_trace_cmd_test 00:05:04.082 ************************************ 00:05:04.082 20:11:34 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:04.082 00:05:04.082 real 0m0.240s 00:05:04.082 user 0m0.192s 00:05:04.082 sys 0m0.036s 00:05:04.082 20:11:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.082 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.341 20:11:34 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:04.341 20:11:34 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:04.341 20:11:34 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:04.341 20:11:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:04.341 20:11:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:04.341 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.341 ************************************ 00:05:04.341 START TEST rpc_daemon_integrity 00:05:04.341 ************************************ 00:05:04.341 20:11:34 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:05:04.341 20:11:34 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:04.341 20:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.341 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.341 20:11:34 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:05:04.341 20:11:34 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:04.341 20:11:34 -- rpc/rpc.sh@13 -- # jq length 00:05:04.341 20:11:34 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:04.341 20:11:34 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:04.341 20:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.341 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.341 20:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.341 20:11:34 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:04.341 20:11:34 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:04.341 20:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.341 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.341 20:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.341 20:11:34 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:04.341 { 00:05:04.341 "name": "Malloc2", 00:05:04.341 "aliases": [ 00:05:04.341 "1a6a05c1-54a2-46e5-9391-4e381d812640" 00:05:04.341 ], 00:05:04.341 "product_name": "Malloc disk", 00:05:04.341 "block_size": 512, 00:05:04.341 "num_blocks": 16384, 00:05:04.341 "uuid": "1a6a05c1-54a2-46e5-9391-4e381d812640", 00:05:04.341 "assigned_rate_limits": { 00:05:04.341 "rw_ios_per_sec": 0, 00:05:04.341 "rw_mbytes_per_sec": 0, 00:05:04.341 "r_mbytes_per_sec": 0, 00:05:04.341 "w_mbytes_per_sec": 0 00:05:04.341 }, 00:05:04.341 "claimed": false, 00:05:04.341 "zoned": false, 00:05:04.341 "supported_io_types": { 00:05:04.341 "read": true, 00:05:04.341 "write": true, 00:05:04.341 "unmap": true, 00:05:04.341 "write_zeroes": true, 00:05:04.341 "flush": true, 00:05:04.342 "reset": true, 00:05:04.342 "compare": false, 00:05:04.342 "compare_and_write": false, 00:05:04.342 "abort": true, 00:05:04.342 "nvme_admin": false, 00:05:04.342 "nvme_io": false 00:05:04.342 }, 00:05:04.342 "memory_domains": [ 00:05:04.342 { 00:05:04.342 "dma_device_id": "system", 00:05:04.342 "dma_device_type": 1 00:05:04.342 }, 00:05:04.342 { 00:05:04.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.342 "dma_device_type": 2 00:05:04.342 } 00:05:04.342 ], 00:05:04.342 "driver_specific": {} 00:05:04.342 } 00:05:04.342 ]' 00:05:04.342 20:11:34 -- rpc/rpc.sh@17 -- # jq length 00:05:04.601 20:11:34 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:04.601 20:11:34 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:04.601 20:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.601 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 [2024-04-24 20:11:34.616931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:04.601 [2024-04-24 20:11:34.616997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:04.601 [2024-04-24 20:11:34.617019] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:05:04.601 [2024-04-24 20:11:34.617034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:04.601 [2024-04-24 20:11:34.619382] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:04.601 [2024-04-24 20:11:34.619427] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:04.601 Passthru0 00:05:04.601 20:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.601 20:11:34 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:04.601 20:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.601 20:11:34 -- common/autotest_common.sh@10 -- # set +x 
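rpc_daemon_integrity repeats the integrity flow through the same rpc_cmd wrapper: create a malloc bdev, layer a passthru bdev on it, dump both, then tear down. Driven by hand against the same socket, the sequence would look like this with scripts/rpc.py (arguments mirror the traced rpc_cmd calls):

  scripts/rpc.py bdev_malloc_create 8 512            # 8 MB malloc bdev; name auto-assigned (Malloc2 in this run)
  scripts/rpc.py bdev_passthru_create -b Malloc2 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length          # expect 2 bdevs in the dump
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc2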
00:05:04.601 20:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.601 20:11:34 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:04.601 { 00:05:04.601 "name": "Malloc2", 00:05:04.601 "aliases": [ 00:05:04.601 "1a6a05c1-54a2-46e5-9391-4e381d812640" 00:05:04.601 ], 00:05:04.601 "product_name": "Malloc disk", 00:05:04.601 "block_size": 512, 00:05:04.601 "num_blocks": 16384, 00:05:04.601 "uuid": "1a6a05c1-54a2-46e5-9391-4e381d812640", 00:05:04.601 "assigned_rate_limits": { 00:05:04.601 "rw_ios_per_sec": 0, 00:05:04.601 "rw_mbytes_per_sec": 0, 00:05:04.601 "r_mbytes_per_sec": 0, 00:05:04.601 "w_mbytes_per_sec": 0 00:05:04.601 }, 00:05:04.601 "claimed": true, 00:05:04.601 "claim_type": "exclusive_write", 00:05:04.601 "zoned": false, 00:05:04.601 "supported_io_types": { 00:05:04.601 "read": true, 00:05:04.601 "write": true, 00:05:04.601 "unmap": true, 00:05:04.601 "write_zeroes": true, 00:05:04.601 "flush": true, 00:05:04.601 "reset": true, 00:05:04.601 "compare": false, 00:05:04.601 "compare_and_write": false, 00:05:04.601 "abort": true, 00:05:04.601 "nvme_admin": false, 00:05:04.601 "nvme_io": false 00:05:04.601 }, 00:05:04.601 "memory_domains": [ 00:05:04.601 { 00:05:04.601 "dma_device_id": "system", 00:05:04.601 "dma_device_type": 1 00:05:04.601 }, 00:05:04.601 { 00:05:04.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.601 "dma_device_type": 2 00:05:04.601 } 00:05:04.601 ], 00:05:04.601 "driver_specific": {} 00:05:04.601 }, 00:05:04.601 { 00:05:04.601 "name": "Passthru0", 00:05:04.601 "aliases": [ 00:05:04.601 "bed4a607-91b0-5bc3-bf21-d37b06b80cd3" 00:05:04.601 ], 00:05:04.601 "product_name": "passthru", 00:05:04.601 "block_size": 512, 00:05:04.601 "num_blocks": 16384, 00:05:04.601 "uuid": "bed4a607-91b0-5bc3-bf21-d37b06b80cd3", 00:05:04.601 "assigned_rate_limits": { 00:05:04.601 "rw_ios_per_sec": 0, 00:05:04.601 "rw_mbytes_per_sec": 0, 00:05:04.601 "r_mbytes_per_sec": 0, 00:05:04.601 "w_mbytes_per_sec": 0 00:05:04.601 }, 00:05:04.601 "claimed": false, 00:05:04.601 "zoned": false, 00:05:04.601 "supported_io_types": { 00:05:04.601 "read": true, 00:05:04.601 "write": true, 00:05:04.601 "unmap": true, 00:05:04.601 "write_zeroes": true, 00:05:04.601 "flush": true, 00:05:04.601 "reset": true, 00:05:04.601 "compare": false, 00:05:04.601 "compare_and_write": false, 00:05:04.601 "abort": true, 00:05:04.601 "nvme_admin": false, 00:05:04.601 "nvme_io": false 00:05:04.601 }, 00:05:04.601 "memory_domains": [ 00:05:04.601 { 00:05:04.601 "dma_device_id": "system", 00:05:04.601 "dma_device_type": 1 00:05:04.601 }, 00:05:04.601 { 00:05:04.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:04.601 "dma_device_type": 2 00:05:04.601 } 00:05:04.601 ], 00:05:04.601 "driver_specific": { 00:05:04.601 "passthru": { 00:05:04.601 "name": "Passthru0", 00:05:04.601 "base_bdev_name": "Malloc2" 00:05:04.601 } 00:05:04.601 } 00:05:04.601 } 00:05:04.601 ]' 00:05:04.601 20:11:34 -- rpc/rpc.sh@21 -- # jq length 00:05:04.601 20:11:34 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:04.601 20:11:34 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:04.601 20:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.601 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 20:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.601 20:11:34 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:04.601 20:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.601 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 20:11:34 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.601 20:11:34 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:04.601 20:11:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:04.602 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.602 20:11:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:04.602 20:11:34 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:04.602 20:11:34 -- rpc/rpc.sh@26 -- # jq length 00:05:04.602 ************************************ 00:05:04.602 END TEST rpc_daemon_integrity 00:05:04.602 ************************************ 00:05:04.602 20:11:34 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:04.602 00:05:04.602 real 0m0.317s 00:05:04.602 user 0m0.174s 00:05:04.602 sys 0m0.045s 00:05:04.602 20:11:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:04.602 20:11:34 -- common/autotest_common.sh@10 -- # set +x 00:05:04.602 20:11:34 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:04.602 20:11:34 -- rpc/rpc.sh@84 -- # killprocess 61563 00:05:04.602 20:11:34 -- common/autotest_common.sh@936 -- # '[' -z 61563 ']' 00:05:04.602 20:11:34 -- common/autotest_common.sh@940 -- # kill -0 61563 00:05:04.602 20:11:34 -- common/autotest_common.sh@941 -- # uname 00:05:04.602 20:11:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:04.861 20:11:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61563 00:05:04.861 killing process with pid 61563 00:05:04.861 20:11:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:04.861 20:11:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:04.861 20:11:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61563' 00:05:04.861 20:11:34 -- common/autotest_common.sh@955 -- # kill 61563 00:05:04.861 20:11:34 -- common/autotest_common.sh@960 -- # wait 61563 00:05:07.398 00:05:07.398 real 0m5.959s 00:05:07.398 user 0m6.537s 00:05:07.398 sys 0m1.096s 00:05:07.398 ************************************ 00:05:07.398 END TEST rpc 00:05:07.398 ************************************ 00:05:07.398 20:11:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:07.398 20:11:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.398 20:11:37 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:07.398 20:11:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.398 20:11:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.398 20:11:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.398 ************************************ 00:05:07.398 START TEST skip_rpc 00:05:07.398 ************************************ 00:05:07.398 20:11:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:07.657 * Looking for test storage... 
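skip_rpc, starting above, flips the usual expectation: the target is launched with --no-rpc-server (flags as traced just below), and the test passes only if an RPC call fails cleanly. The negative check it performs is equivalent to:

  # Target brought up without an RPC listener.
  build/bin/spdk_tgt --no-rpc-server -m 0x1 &

  # spdk_get_version must now fail instead of connecting.
  scripts/rpc.py spdk_get_version || echo 'RPC unavailable, as expected'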
00:05:07.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.657 20:11:37 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:07.657 20:11:37 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:07.657 20:11:37 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:07.657 20:11:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:07.657 20:11:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:07.657 20:11:37 -- common/autotest_common.sh@10 -- # set +x 00:05:07.657 ************************************ 00:05:07.657 START TEST skip_rpc 00:05:07.657 ************************************ 00:05:07.657 20:11:37 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:05:07.657 20:11:37 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=61823 00:05:07.657 20:11:37 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.657 20:11:37 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:07.657 20:11:37 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:07.916 [2024-04-24 20:11:37.917702] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:05:07.917 [2024-04-24 20:11:37.917822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61823 ] 00:05:07.917 [2024-04-24 20:11:38.091055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.175 [2024-04-24 20:11:38.342373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.443 20:11:42 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:13.443 20:11:42 -- common/autotest_common.sh@638 -- # local es=0 00:05:13.443 20:11:42 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:13.443 20:11:42 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:13.443 20:11:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:13.443 20:11:42 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:13.443 20:11:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:13.443 20:11:42 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:13.443 20:11:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:13.443 20:11:42 -- common/autotest_common.sh@10 -- # set +x 00:05:13.443 20:11:42 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:13.443 20:11:42 -- common/autotest_common.sh@641 -- # es=1 00:05:13.443 20:11:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:13.443 20:11:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:13.443 20:11:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:13.443 20:11:42 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:13.443 20:11:42 -- rpc/skip_rpc.sh@23 -- # killprocess 61823 00:05:13.443 20:11:42 -- common/autotest_common.sh@936 -- # '[' -z 61823 ']' 00:05:13.443 20:11:42 -- common/autotest_common.sh@940 -- # kill -0 61823 00:05:13.443 20:11:42 -- common/autotest_common.sh@941 -- # uname 00:05:13.443 20:11:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:13.443 20:11:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61823 00:05:13.443 killing process with pid 61823 00:05:13.443 20:11:42 -- 
common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:13.443 20:11:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:13.443 20:11:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61823' 00:05:13.443 20:11:42 -- common/autotest_common.sh@955 -- # kill 61823 00:05:13.443 20:11:42 -- common/autotest_common.sh@960 -- # wait 61823 00:05:15.348 ************************************ 00:05:15.348 END TEST skip_rpc 00:05:15.348 ************************************ 00:05:15.348 00:05:15.348 real 0m7.710s 00:05:15.348 user 0m7.209s 00:05:15.348 sys 0m0.404s 00:05:15.348 20:11:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.348 20:11:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.348 20:11:45 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:15.348 20:11:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.348 20:11:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.348 20:11:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.607 ************************************ 00:05:15.607 START TEST skip_rpc_with_json 00:05:15.607 ************************************ 00:05:15.607 20:11:45 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:15.607 20:11:45 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:15.607 20:11:45 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=61937 00:05:15.607 20:11:45 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.607 20:11:45 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.607 20:11:45 -- rpc/skip_rpc.sh@31 -- # waitforlisten 61937 00:05:15.607 20:11:45 -- common/autotest_common.sh@817 -- # '[' -z 61937 ']' 00:05:15.607 20:11:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.607 20:11:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:15.607 20:11:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.607 20:11:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:15.607 20:11:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.607 [2024-04-24 20:11:45.776689] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
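skip_rpc_with_json, whose target is starting above, drives the config round-trip shown below: nvmf_get_transports fails while no transport exists, a TCP transport is created, and save_config snapshots the whole subsystem tree into test/rpc/config.json. A sketch of the same round-trip done by hand:

  scripts/rpc.py nvmf_create_transport -t tcp        # after this, nvmf_get_transports succeeds
  scripts/rpc.py save_config > test/rpc/config.json

  # A later target can then replay the snapshot at startup
  # (--json loads a saved config; flag assumed from current SPDK apps).
  build/bin/spdk_tgt --json test/rpc/config.json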
00:05:15.607 [2024-04-24 20:11:45.777051] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61937 ] 00:05:15.866 [2024-04-24 20:11:45.951301] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.124 [2024-04-24 20:11:46.241483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.517 20:11:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:17.517 20:11:47 -- common/autotest_common.sh@850 -- # return 0 00:05:17.517 20:11:47 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:17.517 20:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.517 20:11:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.517 [2024-04-24 20:11:47.314270] nvmf_rpc.c:2502:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:17.517 request: 00:05:17.517 { 00:05:17.517 "trtype": "tcp", 00:05:17.517 "method": "nvmf_get_transports", 00:05:17.517 "req_id": 1 00:05:17.517 } 00:05:17.517 Got JSON-RPC error response 00:05:17.517 response: 00:05:17.517 { 00:05:17.517 "code": -19, 00:05:17.517 "message": "No such device" 00:05:17.517 } 00:05:17.517 20:11:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:17.517 20:11:47 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:17.517 20:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.517 20:11:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.517 [2024-04-24 20:11:47.330274] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:17.517 20:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.517 20:11:47 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:17.517 20:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:17.517 20:11:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.517 20:11:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:17.517 20:11:47 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:17.517 { 00:05:17.517 "subsystems": [ 00:05:17.517 { 00:05:17.517 "subsystem": "keyring", 00:05:17.517 "config": [] 00:05:17.517 }, 00:05:17.517 { 00:05:17.517 "subsystem": "iobuf", 00:05:17.517 "config": [ 00:05:17.517 { 00:05:17.517 "method": "iobuf_set_options", 00:05:17.517 "params": { 00:05:17.517 "small_pool_count": 8192, 00:05:17.517 "large_pool_count": 1024, 00:05:17.517 "small_bufsize": 8192, 00:05:17.517 "large_bufsize": 135168 00:05:17.517 } 00:05:17.517 } 00:05:17.517 ] 00:05:17.517 }, 00:05:17.517 { 00:05:17.517 "subsystem": "sock", 00:05:17.517 "config": [ 00:05:17.517 { 00:05:17.517 "method": "sock_impl_set_options", 00:05:17.517 "params": { 00:05:17.517 "impl_name": "posix", 00:05:17.517 "recv_buf_size": 2097152, 00:05:17.517 "send_buf_size": 2097152, 00:05:17.517 "enable_recv_pipe": true, 00:05:17.517 "enable_quickack": false, 00:05:17.517 "enable_placement_id": 0, 00:05:17.517 "enable_zerocopy_send_server": true, 00:05:17.517 "enable_zerocopy_send_client": false, 00:05:17.517 "zerocopy_threshold": 0, 00:05:17.517 "tls_version": 0, 00:05:17.517 "enable_ktls": false 00:05:17.517 } 00:05:17.517 }, 00:05:17.517 { 00:05:17.517 "method": "sock_impl_set_options", 00:05:17.517 "params": { 00:05:17.517 "impl_name": "ssl", 00:05:17.517 "recv_buf_size": 4096, 00:05:17.517 "send_buf_size": 4096, 00:05:17.517 "enable_recv_pipe": true, 00:05:17.517 
"enable_quickack": false, 00:05:17.517 "enable_placement_id": 0, 00:05:17.517 "enable_zerocopy_send_server": true, 00:05:17.517 "enable_zerocopy_send_client": false, 00:05:17.517 "zerocopy_threshold": 0, 00:05:17.517 "tls_version": 0, 00:05:17.517 "enable_ktls": false 00:05:17.517 } 00:05:17.517 } 00:05:17.517 ] 00:05:17.517 }, 00:05:17.517 { 00:05:17.517 "subsystem": "vmd", 00:05:17.517 "config": [] 00:05:17.517 }, 00:05:17.517 { 00:05:17.517 "subsystem": "accel", 00:05:17.517 "config": [ 00:05:17.517 { 00:05:17.517 "method": "accel_set_options", 00:05:17.517 "params": { 00:05:17.517 "small_cache_size": 128, 00:05:17.517 "large_cache_size": 16, 00:05:17.517 "task_count": 2048, 00:05:17.517 "sequence_count": 2048, 00:05:17.517 "buf_count": 2048 00:05:17.517 } 00:05:17.517 } 00:05:17.517 ] 00:05:17.517 }, 00:05:17.517 { 00:05:17.517 "subsystem": "bdev", 00:05:17.517 "config": [ 00:05:17.517 { 00:05:17.517 "method": "bdev_set_options", 00:05:17.517 "params": { 00:05:17.517 "bdev_io_pool_size": 65535, 00:05:17.517 "bdev_io_cache_size": 256, 00:05:17.517 "bdev_auto_examine": true, 00:05:17.518 "iobuf_small_cache_size": 128, 00:05:17.518 "iobuf_large_cache_size": 16 00:05:17.518 } 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "method": "bdev_raid_set_options", 00:05:17.518 "params": { 00:05:17.518 "process_window_size_kb": 1024 00:05:17.518 } 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "method": "bdev_iscsi_set_options", 00:05:17.518 "params": { 00:05:17.518 "timeout_sec": 30 00:05:17.518 } 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "method": "bdev_nvme_set_options", 00:05:17.518 "params": { 00:05:17.518 "action_on_timeout": "none", 00:05:17.518 "timeout_us": 0, 00:05:17.518 "timeout_admin_us": 0, 00:05:17.518 "keep_alive_timeout_ms": 10000, 00:05:17.518 "arbitration_burst": 0, 00:05:17.518 "low_priority_weight": 0, 00:05:17.518 "medium_priority_weight": 0, 00:05:17.518 "high_priority_weight": 0, 00:05:17.518 "nvme_adminq_poll_period_us": 10000, 00:05:17.518 "nvme_ioq_poll_period_us": 0, 00:05:17.518 "io_queue_requests": 0, 00:05:17.518 "delay_cmd_submit": true, 00:05:17.518 "transport_retry_count": 4, 00:05:17.518 "bdev_retry_count": 3, 00:05:17.518 "transport_ack_timeout": 0, 00:05:17.518 "ctrlr_loss_timeout_sec": 0, 00:05:17.518 "reconnect_delay_sec": 0, 00:05:17.518 "fast_io_fail_timeout_sec": 0, 00:05:17.518 "disable_auto_failback": false, 00:05:17.518 "generate_uuids": false, 00:05:17.518 "transport_tos": 0, 00:05:17.518 "nvme_error_stat": false, 00:05:17.518 "rdma_srq_size": 0, 00:05:17.518 "io_path_stat": false, 00:05:17.518 "allow_accel_sequence": false, 00:05:17.518 "rdma_max_cq_size": 0, 00:05:17.518 "rdma_cm_event_timeout_ms": 0, 00:05:17.518 "dhchap_digests": [ 00:05:17.518 "sha256", 00:05:17.518 "sha384", 00:05:17.518 "sha512" 00:05:17.518 ], 00:05:17.518 "dhchap_dhgroups": [ 00:05:17.518 "null", 00:05:17.518 "ffdhe2048", 00:05:17.518 "ffdhe3072", 00:05:17.518 "ffdhe4096", 00:05:17.518 "ffdhe6144", 00:05:17.518 "ffdhe8192" 00:05:17.518 ] 00:05:17.518 } 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "method": "bdev_nvme_set_hotplug", 00:05:17.518 "params": { 00:05:17.518 "period_us": 100000, 00:05:17.518 "enable": false 00:05:17.518 } 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "method": "bdev_wait_for_examine" 00:05:17.518 } 00:05:17.518 ] 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "subsystem": "scsi", 00:05:17.518 "config": null 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "subsystem": "scheduler", 00:05:17.518 "config": [ 00:05:17.518 { 00:05:17.518 "method": 
"framework_set_scheduler", 00:05:17.518 "params": { 00:05:17.518 "name": "static" 00:05:17.518 } 00:05:17.518 } 00:05:17.518 ] 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "subsystem": "vhost_scsi", 00:05:17.518 "config": [] 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "subsystem": "vhost_blk", 00:05:17.518 "config": [] 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "subsystem": "ublk", 00:05:17.518 "config": [] 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "subsystem": "nbd", 00:05:17.518 "config": [] 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "subsystem": "nvmf", 00:05:17.518 "config": [ 00:05:17.518 { 00:05:17.518 "method": "nvmf_set_config", 00:05:17.518 "params": { 00:05:17.518 "discovery_filter": "match_any", 00:05:17.518 "admin_cmd_passthru": { 00:05:17.518 "identify_ctrlr": false 00:05:17.518 } 00:05:17.518 } 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "method": "nvmf_set_max_subsystems", 00:05:17.518 "params": { 00:05:17.518 "max_subsystems": 1024 00:05:17.518 } 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "method": "nvmf_set_crdt", 00:05:17.518 "params": { 00:05:17.518 "crdt1": 0, 00:05:17.518 "crdt2": 0, 00:05:17.518 "crdt3": 0 00:05:17.518 } 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "method": "nvmf_create_transport", 00:05:17.518 "params": { 00:05:17.518 "trtype": "TCP", 00:05:17.518 "max_queue_depth": 128, 00:05:17.518 "max_io_qpairs_per_ctrlr": 127, 00:05:17.518 "in_capsule_data_size": 4096, 00:05:17.518 "max_io_size": 131072, 00:05:17.518 "io_unit_size": 131072, 00:05:17.518 "max_aq_depth": 128, 00:05:17.518 "num_shared_buffers": 511, 00:05:17.518 "buf_cache_size": 4294967295, 00:05:17.518 "dif_insert_or_strip": false, 00:05:17.518 "zcopy": false, 00:05:17.518 "c2h_success": true, 00:05:17.518 "sock_priority": 0, 00:05:17.518 "abort_timeout_sec": 1, 00:05:17.518 "ack_timeout": 0 00:05:17.518 } 00:05:17.518 } 00:05:17.518 ] 00:05:17.518 }, 00:05:17.518 { 00:05:17.518 "subsystem": "iscsi", 00:05:17.518 "config": [ 00:05:17.518 { 00:05:17.518 "method": "iscsi_set_options", 00:05:17.518 "params": { 00:05:17.518 "node_base": "iqn.2016-06.io.spdk", 00:05:17.518 "max_sessions": 128, 00:05:17.518 "max_connections_per_session": 2, 00:05:17.518 "max_queue_depth": 64, 00:05:17.518 "default_time2wait": 2, 00:05:17.518 "default_time2retain": 20, 00:05:17.518 "first_burst_length": 8192, 00:05:17.518 "immediate_data": true, 00:05:17.518 "allow_duplicated_isid": false, 00:05:17.518 "error_recovery_level": 0, 00:05:17.518 "nop_timeout": 60, 00:05:17.518 "nop_in_interval": 30, 00:05:17.518 "disable_chap": false, 00:05:17.518 "require_chap": false, 00:05:17.518 "mutual_chap": false, 00:05:17.518 "chap_group": 0, 00:05:17.518 "max_large_datain_per_connection": 64, 00:05:17.518 "max_r2t_per_connection": 4, 00:05:17.518 "pdu_pool_size": 36864, 00:05:17.518 "immediate_data_pool_size": 16384, 00:05:17.518 "data_out_pool_size": 2048 00:05:17.518 } 00:05:17.518 } 00:05:17.518 ] 00:05:17.518 } 00:05:17.518 ] 00:05:17.518 } 00:05:17.518 20:11:47 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:17.518 20:11:47 -- rpc/skip_rpc.sh@40 -- # killprocess 61937 00:05:17.518 20:11:47 -- common/autotest_common.sh@936 -- # '[' -z 61937 ']' 00:05:17.518 20:11:47 -- common/autotest_common.sh@940 -- # kill -0 61937 00:05:17.518 20:11:47 -- common/autotest_common.sh@941 -- # uname 00:05:17.518 20:11:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:17.518 20:11:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61937 00:05:17.518 20:11:47 -- common/autotest_common.sh@942 -- # 
process_name=reactor_0 00:05:17.518 20:11:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:17.518 killing process with pid 61937 00:05:17.518 20:11:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61937' 00:05:17.518 20:11:47 -- common/autotest_common.sh@955 -- # kill 61937 00:05:17.518 20:11:47 -- common/autotest_common.sh@960 -- # wait 61937 00:05:20.054 20:11:50 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=61993 00:05:20.054 20:11:50 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:20.054 20:11:50 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:25.323 20:11:55 -- rpc/skip_rpc.sh@50 -- # killprocess 61993 00:05:25.323 20:11:55 -- common/autotest_common.sh@936 -- # '[' -z 61993 ']' 00:05:25.323 20:11:55 -- common/autotest_common.sh@940 -- # kill -0 61993 00:05:25.323 20:11:55 -- common/autotest_common.sh@941 -- # uname 00:05:25.323 20:11:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:25.323 20:11:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61993 00:05:25.323 killing process with pid 61993 00:05:25.323 20:11:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:25.323 20:11:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:25.323 20:11:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61993' 00:05:25.323 20:11:55 -- common/autotest_common.sh@955 -- # kill 61993 00:05:25.323 20:11:55 -- common/autotest_common.sh@960 -- # wait 61993 00:05:27.853 20:11:57 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:27.853 20:11:57 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:27.853 00:05:27.853 real 0m12.207s 00:05:27.853 user 0m11.443s 00:05:27.853 sys 0m1.053s 00:05:27.853 ************************************ 00:05:27.853 END TEST skip_rpc_with_json 00:05:27.853 ************************************ 00:05:27.853 20:11:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:27.853 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:05:27.853 20:11:57 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:27.853 20:11:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:27.853 20:11:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:27.853 20:11:57 -- common/autotest_common.sh@10 -- # set +x 00:05:27.853 ************************************ 00:05:27.853 START TEST skip_rpc_with_delay 00:05:27.853 ************************************ 00:05:27.853 20:11:58 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:27.853 20:11:58 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.853 20:11:58 -- common/autotest_common.sh@638 -- # local es=0 00:05:27.853 20:11:58 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:27.853 20:11:58 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.853 20:11:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.853 20:11:58 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.853 20:11:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.853 
20:11:58 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.853 20:11:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:27.853 20:11:58 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:27.853 20:11:58 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:27.853 20:11:58 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:28.112 [2024-04-24 20:11:58.135258] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:28.112 [2024-04-24 20:11:58.135392] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:28.112 20:11:58 -- common/autotest_common.sh@641 -- # es=1 00:05:28.112 20:11:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:28.112 20:11:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:28.112 ************************************ 00:05:28.112 END TEST skip_rpc_with_delay 00:05:28.112 ************************************ 00:05:28.112 20:11:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:28.112 00:05:28.112 real 0m0.178s 00:05:28.112 user 0m0.086s 00:05:28.112 sys 0m0.091s 00:05:28.112 20:11:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.112 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.112 20:11:58 -- rpc/skip_rpc.sh@77 -- # uname 00:05:28.112 20:11:58 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:28.112 20:11:58 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:28.112 20:11:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.112 20:11:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.112 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.371 ************************************ 00:05:28.371 START TEST exit_on_failed_rpc_init 00:05:28.371 ************************************ 00:05:28.371 20:11:58 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:28.371 20:11:58 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=62140 00:05:28.371 20:11:58 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.371 20:11:58 -- rpc/skip_rpc.sh@63 -- # waitforlisten 62140 00:05:28.371 20:11:58 -- common/autotest_common.sh@817 -- # '[' -z 62140 ']' 00:05:28.371 20:11:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.371 20:11:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:28.371 20:11:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.371 20:11:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:28.371 20:11:58 -- common/autotest_common.sh@10 -- # set +x 00:05:28.371 [2024-04-24 20:11:58.466662] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:05:28.371 [2024-04-24 20:11:58.466808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62140 ] 00:05:28.630 [2024-04-24 20:11:58.640886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.888 [2024-04-24 20:11:58.895494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.825 20:11:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:29.825 20:11:59 -- common/autotest_common.sh@850 -- # return 0 00:05:29.825 20:11:59 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.825 20:11:59 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.825 20:11:59 -- common/autotest_common.sh@638 -- # local es=0 00:05:29.825 20:11:59 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.825 20:11:59 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.825 20:11:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:29.825 20:11:59 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.825 20:11:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:29.825 20:11:59 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.825 20:11:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:29.825 20:11:59 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.825 20:11:59 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:29.825 20:11:59 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.825 [2024-04-24 20:12:00.017573] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:05:29.825 [2024-04-24 20:12:00.017837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62158 ] 00:05:30.084 [2024-04-24 20:12:00.189837] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.343 [2024-04-24 20:12:00.446470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.343 [2024-04-24 20:12:00.446748] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
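The failure being provoked here can be reproduced by hand: one target already owns the default RPC socket, so a second target pointed at the same socket must exit non-zero, which is what the NOT wrapper asserts. A minimal sketch (the standalone invocation below is an assumption, not a copy of the test; binary path and core masks are the ones from this run):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # first target owns /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2     # second target fails: socket in use
    echo $?                                                    # non-zero, as NOT expects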
00:05:30.343 [2024-04-24 20:12:00.447158] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:30.343 [2024-04-24 20:12:00.447452] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:30.939 20:12:00 -- common/autotest_common.sh@641 -- # es=234 00:05:30.939 20:12:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:30.940 20:12:00 -- common/autotest_common.sh@650 -- # es=106 00:05:30.940 20:12:00 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:30.940 20:12:00 -- common/autotest_common.sh@658 -- # es=1 00:05:30.940 20:12:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:30.940 20:12:00 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:30.940 20:12:00 -- rpc/skip_rpc.sh@70 -- # killprocess 62140 00:05:30.940 20:12:00 -- common/autotest_common.sh@936 -- # '[' -z 62140 ']' 00:05:30.940 20:12:00 -- common/autotest_common.sh@940 -- # kill -0 62140 00:05:30.940 20:12:00 -- common/autotest_common.sh@941 -- # uname 00:05:30.940 20:12:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:30.940 20:12:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62140 00:05:30.940 20:12:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:30.940 20:12:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:30.940 20:12:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62140' 00:05:30.940 killing process with pid 62140 00:05:30.940 20:12:00 -- common/autotest_common.sh@955 -- # kill 62140 00:05:30.940 20:12:00 -- common/autotest_common.sh@960 -- # wait 62140 00:05:33.471 00:05:33.471 real 0m5.219s 00:05:33.471 user 0m5.851s 00:05:33.471 sys 0m0.636s 00:05:33.471 ************************************ 00:05:33.471 END TEST exit_on_failed_rpc_init 00:05:33.471 ************************************ 00:05:33.471 20:12:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.471 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:05:33.471 20:12:03 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:33.471 ************************************ 00:05:33.471 END TEST skip_rpc 00:05:33.471 ************************************ 00:05:33.471 00:05:33.471 real 0m26.044s 00:05:33.471 user 0m24.842s 00:05:33.471 sys 0m2.611s 00:05:33.471 20:12:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.471 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:05:33.471 20:12:03 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:33.471 20:12:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.471 20:12:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.471 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:05:33.730 ************************************ 00:05:33.730 START TEST rpc_client 00:05:33.730 ************************************ 00:05:33.730 20:12:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:33.730 * Looking for test storage... 
00:05:33.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:33.730 20:12:03 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:33.989 OK 00:05:33.989 20:12:03 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:33.989 00:05:33.989 real 0m0.209s 00:05:33.989 user 0m0.102s 00:05:33.989 sys 0m0.118s 00:05:33.989 20:12:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.989 20:12:03 -- common/autotest_common.sh@10 -- # set +x 00:05:33.989 ************************************ 00:05:33.989 END TEST rpc_client 00:05:33.989 ************************************ 00:05:33.989 20:12:04 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:33.989 20:12:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.989 20:12:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.989 20:12:04 -- common/autotest_common.sh@10 -- # set +x 00:05:33.989 ************************************ 00:05:33.989 START TEST json_config 00:05:33.989 ************************************ 00:05:33.989 20:12:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:33.989 20:12:04 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:34.248 20:12:04 -- nvmf/common.sh@7 -- # uname -s 00:05:34.248 20:12:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.248 20:12:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.248 20:12:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.248 20:12:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.248 20:12:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.248 20:12:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.248 20:12:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.248 20:12:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.248 20:12:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.248 20:12:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.248 20:12:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9198a36b-a46e-4e0f-a169-b7f1c9873fac 00:05:34.248 20:12:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=9198a36b-a46e-4e0f-a169-b7f1c9873fac 00:05:34.248 20:12:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.248 20:12:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.248 20:12:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.249 20:12:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.249 20:12:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:34.249 20:12:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.249 20:12:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.249 20:12:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.249 20:12:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.249 20:12:04 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.249 20:12:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.249 20:12:04 -- paths/export.sh@5 -- # export PATH 00:05:34.249 20:12:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.249 20:12:04 -- nvmf/common.sh@47 -- # : 0 00:05:34.249 20:12:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:34.249 20:12:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:34.249 20:12:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.249 20:12:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.249 20:12:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.249 20:12:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:34.249 20:12:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:34.249 20:12:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:34.249 20:12:04 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:34.249 20:12:04 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:34.249 20:12:04 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:34.249 20:12:04 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:34.249 20:12:04 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:34.249 WARNING: No tests are enabled so not running JSON configuration tests 00:05:34.249 20:12:04 -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:34.249 20:12:04 -- json_config/json_config.sh@28 -- # exit 0 00:05:34.249 00:05:34.249 real 0m0.122s 00:05:34.249 user 0m0.055s 00:05:34.249 sys 0m0.068s 00:05:34.249 20:12:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:34.249 20:12:04 -- common/autotest_common.sh@10 -- # set +x 00:05:34.249 ************************************ 00:05:34.249 END TEST json_config 00:05:34.249 ************************************ 00:05:34.249 20:12:04 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:34.249 20:12:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:34.249 20:12:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:34.249 20:12:04 -- common/autotest_common.sh@10 -- # 
set +x 00:05:34.249 ************************************ 00:05:34.249 START TEST json_config_extra_key 00:05:34.249 ************************************ 00:05:34.249 20:12:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:34.249 20:12:04 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:34.249 20:12:04 -- nvmf/common.sh@7 -- # uname -s 00:05:34.249 20:12:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.249 20:12:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.249 20:12:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.249 20:12:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.249 20:12:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.249 20:12:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.249 20:12:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.249 20:12:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.249 20:12:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.249 20:12:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.249 20:12:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9198a36b-a46e-4e0f-a169-b7f1c9873fac 00:05:34.249 20:12:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=9198a36b-a46e-4e0f-a169-b7f1c9873fac 00:05:34.249 20:12:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.249 20:12:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.249 20:12:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:34.249 20:12:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.249 20:12:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:34.509 20:12:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.509 20:12:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.509 20:12:04 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.509 20:12:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.509 20:12:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.509 20:12:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.509 20:12:04 -- paths/export.sh@5 -- # export PATH 00:05:34.509 20:12:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.509 20:12:04 -- nvmf/common.sh@47 -- # : 0 00:05:34.509 20:12:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:34.509 20:12:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:34.509 20:12:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.509 20:12:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.509 20:12:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.509 20:12:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:34.509 20:12:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:34.509 20:12:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:34.509 20:12:04 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:34.509 20:12:04 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:34.509 20:12:04 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:34.509 20:12:04 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:34.509 20:12:04 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:34.509 20:12:04 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:34.509 20:12:04 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:34.509 20:12:04 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:34.509 20:12:04 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:34.509 20:12:04 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:34.509 INFO: launching applications... 00:05:34.509 20:12:04 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:34.509 20:12:04 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:34.509 20:12:04 -- json_config/common.sh@9 -- # local app=target 00:05:34.509 20:12:04 -- json_config/common.sh@10 -- # shift 00:05:34.509 20:12:04 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:34.509 20:12:04 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:34.509 20:12:04 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:34.509 20:12:04 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.509 20:12:04 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:34.509 Waiting for target to run... 00:05:34.509 20:12:04 -- json_config/common.sh@22 -- # app_pid["$app"]=62371 00:05:34.509 20:12:04 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
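The launch-and-wait sequence traced here boils down to starting the target against a private RPC socket and polling until that socket answers. A minimal sketch (flags and paths are the ones from this run; the poll body is an assumption, not a copy of json_config/common.sh):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk_tgt.sock
    "$spdk_tgt" -m 0x1 -s 1024 -r "$sock" --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    pid=$!
    for _ in $(seq 1 100); do                         # max_retries=100, as traced
        "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1                                     # poll interval is an assumption
    done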
00:05:34.509 20:12:04 -- json_config/common.sh@25 -- # waitforlisten 62371 /var/tmp/spdk_tgt.sock 00:05:34.509 20:12:04 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:34.509 20:12:04 -- common/autotest_common.sh@817 -- # '[' -z 62371 ']' 00:05:34.509 20:12:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:34.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:34.509 20:12:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:34.509 20:12:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:34.509 20:12:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:34.509 20:12:04 -- common/autotest_common.sh@10 -- # set +x 00:05:34.509 [2024-04-24 20:12:04.601563] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:05:34.509 [2024-04-24 20:12:04.601682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62371 ] 00:05:34.768 [2024-04-24 20:12:04.992985] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.027 [2024-04-24 20:12:05.228138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.963 20:12:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:35.963 20:12:06 -- common/autotest_common.sh@850 -- # return 0 00:05:35.963 00:05:35.963 20:12:06 -- json_config/common.sh@26 -- # echo '' 00:05:35.963 INFO: shutting down applications... 00:05:35.963 20:12:06 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
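The shutdown sequence traced below reduces to a SIGINT followed by a bounded liveness poll. A sketch reconstructed from the xtrace lines (not copied from json_config/common.sh):

    kill -SIGINT "$pid"                        # ask the target to exit cleanly
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break    # kill -0 only tests that the pid is alive
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null || echo 'SPDK target shutdown done'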
00:05:35.963 20:12:06 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:35.963 20:12:06 -- json_config/common.sh@31 -- # local app=target 00:05:35.963 20:12:06 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:35.963 20:12:06 -- json_config/common.sh@35 -- # [[ -n 62371 ]] 00:05:35.963 20:12:06 -- json_config/common.sh@38 -- # kill -SIGINT 62371 00:05:35.963 20:12:06 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:35.963 20:12:06 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:35.963 20:12:06 -- json_config/common.sh@41 -- # kill -0 62371 00:05:35.963 20:12:06 -- json_config/common.sh@45 -- # sleep 0.5 00:05:36.532 20:12:06 -- json_config/common.sh@40 -- # (( i++ )) 00:05:36.532 20:12:06 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.532 20:12:06 -- json_config/common.sh@41 -- # kill -0 62371 00:05:36.532 20:12:06 -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.100 20:12:07 -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.100 20:12:07 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.100 20:12:07 -- json_config/common.sh@41 -- # kill -0 62371 00:05:37.100 20:12:07 -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.716 20:12:07 -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.716 20:12:07 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.716 20:12:07 -- json_config/common.sh@41 -- # kill -0 62371 00:05:37.716 20:12:07 -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.015 20:12:08 -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.015 20:12:08 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.015 20:12:08 -- json_config/common.sh@41 -- # kill -0 62371 00:05:38.015 20:12:08 -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.581 20:12:08 -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.581 20:12:08 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.581 20:12:08 -- json_config/common.sh@41 -- # kill -0 62371 00:05:38.581 20:12:08 -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.149 20:12:09 -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.149 20:12:09 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.149 20:12:09 -- json_config/common.sh@41 -- # kill -0 62371 00:05:39.149 20:12:09 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.149 20:12:09 -- json_config/common.sh@43 -- # break 00:05:39.149 SPDK target shutdown done 00:05:39.149 20:12:09 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.149 20:12:09 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.149 Success 00:05:39.149 20:12:09 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:39.149 00:05:39.149 real 0m4.785s 00:05:39.149 user 0m4.515s 00:05:39.149 sys 0m0.574s 00:05:39.149 20:12:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.149 ************************************ 00:05:39.149 END TEST json_config_extra_key 00:05:39.149 ************************************ 00:05:39.149 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.149 20:12:09 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.149 20:12:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.149 20:12:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.149 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.149 ************************************ 00:05:39.149 START TEST alias_rpc 00:05:39.149 ************************************ 00:05:39.149 20:12:09 -- common/autotest_common.sh@1111 
-- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.407 * Looking for test storage... 00:05:39.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:39.407 20:12:09 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.407 20:12:09 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.407 20:12:09 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=62480 00:05:39.407 20:12:09 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 62480 00:05:39.407 20:12:09 -- common/autotest_common.sh@817 -- # '[' -z 62480 ']' 00:05:39.407 20:12:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.407 20:12:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.407 20:12:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.407 20:12:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.407 20:12:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.407 [2024-04-24 20:12:09.548972] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:05:39.407 [2024-04-24 20:12:09.549118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62480 ] 00:05:39.665 [2024-04-24 20:12:09.733669] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.923 [2024-04-24 20:12:09.977790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.859 20:12:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:40.859 20:12:10 -- common/autotest_common.sh@850 -- # return 0 00:05:40.859 20:12:10 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:41.117 20:12:11 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 62480 00:05:41.117 20:12:11 -- common/autotest_common.sh@936 -- # '[' -z 62480 ']' 00:05:41.117 20:12:11 -- common/autotest_common.sh@940 -- # kill -0 62480 00:05:41.117 20:12:11 -- common/autotest_common.sh@941 -- # uname 00:05:41.117 20:12:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:41.117 20:12:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62480 00:05:41.117 20:12:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:41.117 20:12:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:41.117 killing process with pid 62480 00:05:41.117 20:12:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62480' 00:05:41.117 20:12:11 -- common/autotest_common.sh@955 -- # kill 62480 00:05:41.117 20:12:11 -- common/autotest_common.sh@960 -- # wait 62480 00:05:43.647 00:05:43.647 real 0m4.559s 00:05:43.647 user 0m4.477s 00:05:43.647 sys 0m0.605s 00:05:43.647 20:12:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.647 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.647 ************************************ 00:05:43.647 END TEST alias_rpc 00:05:43.647 ************************************ 00:05:43.906 20:12:13 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:43.906 20:12:13 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp 
/home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:43.906 20:12:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.906 20:12:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.906 20:12:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.906 ************************************ 00:05:43.906 START TEST spdkcli_tcp 00:05:43.906 ************************************ 00:05:43.906 20:12:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:44.167 * Looking for test storage... 00:05:44.167 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:44.167 20:12:14 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:44.167 20:12:14 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:44.167 20:12:14 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:44.167 20:12:14 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:44.167 20:12:14 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:44.167 20:12:14 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:44.167 20:12:14 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:44.167 20:12:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:44.167 20:12:14 -- common/autotest_common.sh@10 -- # set +x 00:05:44.167 20:12:14 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=62590 00:05:44.167 20:12:14 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:44.167 20:12:14 -- spdkcli/tcp.sh@27 -- # waitforlisten 62590 00:05:44.167 20:12:14 -- common/autotest_common.sh@817 -- # '[' -z 62590 ']' 00:05:44.167 20:12:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.167 20:12:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:44.167 20:12:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.167 20:12:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:44.167 20:12:14 -- common/autotest_common.sh@10 -- # set +x 00:05:44.167 [2024-04-24 20:12:14.299253] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:05:44.167 [2024-04-24 20:12:14.299388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62590 ] 00:05:44.427 [2024-04-24 20:12:14.475332] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.687 [2024-04-24 20:12:14.732100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.687 [2024-04-24 20:12:14.732135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.620 20:12:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:45.620 20:12:15 -- common/autotest_common.sh@850 -- # return 0 00:05:45.620 20:12:15 -- spdkcli/tcp.sh@31 -- # socat_pid=62613 00:05:45.620 20:12:15 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:45.620 20:12:15 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:45.879 [ 00:05:45.879 "bdev_malloc_delete", 00:05:45.879 "bdev_malloc_create", 00:05:45.879 "bdev_null_resize", 00:05:45.879 "bdev_null_delete", 00:05:45.879 "bdev_null_create", 00:05:45.879 "bdev_nvme_cuse_unregister", 00:05:45.879 "bdev_nvme_cuse_register", 00:05:45.879 "bdev_opal_new_user", 00:05:45.879 "bdev_opal_set_lock_state", 00:05:45.879 "bdev_opal_delete", 00:05:45.879 "bdev_opal_get_info", 00:05:45.879 "bdev_opal_create", 00:05:45.879 "bdev_nvme_opal_revert", 00:05:45.879 "bdev_nvme_opal_init", 00:05:45.879 "bdev_nvme_send_cmd", 00:05:45.879 "bdev_nvme_get_path_iostat", 00:05:45.879 "bdev_nvme_get_mdns_discovery_info", 00:05:45.879 "bdev_nvme_stop_mdns_discovery", 00:05:45.879 "bdev_nvme_start_mdns_discovery", 00:05:45.879 "bdev_nvme_set_multipath_policy", 00:05:45.879 "bdev_nvme_set_preferred_path", 00:05:45.879 "bdev_nvme_get_io_paths", 00:05:45.879 "bdev_nvme_remove_error_injection", 00:05:45.879 "bdev_nvme_add_error_injection", 00:05:45.879 "bdev_nvme_get_discovery_info", 00:05:45.879 "bdev_nvme_stop_discovery", 00:05:45.879 "bdev_nvme_start_discovery", 00:05:45.879 "bdev_nvme_get_controller_health_info", 00:05:45.879 "bdev_nvme_disable_controller", 00:05:45.879 "bdev_nvme_enable_controller", 00:05:45.879 "bdev_nvme_reset_controller", 00:05:45.879 "bdev_nvme_get_transport_statistics", 00:05:45.879 "bdev_nvme_apply_firmware", 00:05:45.879 "bdev_nvme_detach_controller", 00:05:45.879 "bdev_nvme_get_controllers", 00:05:45.879 "bdev_nvme_attach_controller", 00:05:45.879 "bdev_nvme_set_hotplug", 00:05:45.879 "bdev_nvme_set_options", 00:05:45.879 "bdev_passthru_delete", 00:05:45.879 "bdev_passthru_create", 00:05:45.879 "bdev_lvol_grow_lvstore", 00:05:45.879 "bdev_lvol_get_lvols", 00:05:45.879 "bdev_lvol_get_lvstores", 00:05:45.879 "bdev_lvol_delete", 00:05:45.879 "bdev_lvol_set_read_only", 00:05:45.879 "bdev_lvol_resize", 00:05:45.879 "bdev_lvol_decouple_parent", 00:05:45.879 "bdev_lvol_inflate", 00:05:45.879 "bdev_lvol_rename", 00:05:45.879 "bdev_lvol_clone_bdev", 00:05:45.879 "bdev_lvol_clone", 00:05:45.879 "bdev_lvol_snapshot", 00:05:45.879 "bdev_lvol_create", 00:05:45.879 "bdev_lvol_delete_lvstore", 00:05:45.879 "bdev_lvol_rename_lvstore", 00:05:45.879 "bdev_lvol_create_lvstore", 00:05:45.879 "bdev_raid_set_options", 00:05:45.879 "bdev_raid_remove_base_bdev", 00:05:45.879 "bdev_raid_add_base_bdev", 00:05:45.879 "bdev_raid_delete", 00:05:45.879 "bdev_raid_create", 00:05:45.879 "bdev_raid_get_bdevs", 00:05:45.879 "bdev_error_inject_error", 
00:05:45.879 "bdev_error_delete", 00:05:45.879 "bdev_error_create", 00:05:45.879 "bdev_split_delete", 00:05:45.879 "bdev_split_create", 00:05:45.879 "bdev_delay_delete", 00:05:45.879 "bdev_delay_create", 00:05:45.879 "bdev_delay_update_latency", 00:05:45.879 "bdev_zone_block_delete", 00:05:45.879 "bdev_zone_block_create", 00:05:45.879 "blobfs_create", 00:05:45.879 "blobfs_detect", 00:05:45.879 "blobfs_set_cache_size", 00:05:45.879 "bdev_xnvme_delete", 00:05:45.879 "bdev_xnvme_create", 00:05:45.879 "bdev_aio_delete", 00:05:45.879 "bdev_aio_rescan", 00:05:45.879 "bdev_aio_create", 00:05:45.879 "bdev_ftl_set_property", 00:05:45.879 "bdev_ftl_get_properties", 00:05:45.879 "bdev_ftl_get_stats", 00:05:45.879 "bdev_ftl_unmap", 00:05:45.879 "bdev_ftl_unload", 00:05:45.879 "bdev_ftl_delete", 00:05:45.879 "bdev_ftl_load", 00:05:45.879 "bdev_ftl_create", 00:05:45.879 "bdev_virtio_attach_controller", 00:05:45.879 "bdev_virtio_scsi_get_devices", 00:05:45.879 "bdev_virtio_detach_controller", 00:05:45.879 "bdev_virtio_blk_set_hotplug", 00:05:45.879 "bdev_iscsi_delete", 00:05:45.879 "bdev_iscsi_create", 00:05:45.879 "bdev_iscsi_set_options", 00:05:45.879 "accel_error_inject_error", 00:05:45.879 "ioat_scan_accel_module", 00:05:45.879 "dsa_scan_accel_module", 00:05:45.879 "iaa_scan_accel_module", 00:05:45.879 "keyring_file_remove_key", 00:05:45.879 "keyring_file_add_key", 00:05:45.879 "iscsi_set_options", 00:05:45.879 "iscsi_get_auth_groups", 00:05:45.879 "iscsi_auth_group_remove_secret", 00:05:45.879 "iscsi_auth_group_add_secret", 00:05:45.879 "iscsi_delete_auth_group", 00:05:45.879 "iscsi_create_auth_group", 00:05:45.879 "iscsi_set_discovery_auth", 00:05:45.879 "iscsi_get_options", 00:05:45.879 "iscsi_target_node_request_logout", 00:05:45.879 "iscsi_target_node_set_redirect", 00:05:45.879 "iscsi_target_node_set_auth", 00:05:45.879 "iscsi_target_node_add_lun", 00:05:45.879 "iscsi_get_stats", 00:05:45.879 "iscsi_get_connections", 00:05:45.879 "iscsi_portal_group_set_auth", 00:05:45.879 "iscsi_start_portal_group", 00:05:45.879 "iscsi_delete_portal_group", 00:05:45.879 "iscsi_create_portal_group", 00:05:45.879 "iscsi_get_portal_groups", 00:05:45.879 "iscsi_delete_target_node", 00:05:45.879 "iscsi_target_node_remove_pg_ig_maps", 00:05:45.879 "iscsi_target_node_add_pg_ig_maps", 00:05:45.879 "iscsi_create_target_node", 00:05:45.879 "iscsi_get_target_nodes", 00:05:45.879 "iscsi_delete_initiator_group", 00:05:45.879 "iscsi_initiator_group_remove_initiators", 00:05:45.879 "iscsi_initiator_group_add_initiators", 00:05:45.879 "iscsi_create_initiator_group", 00:05:45.879 "iscsi_get_initiator_groups", 00:05:45.879 "nvmf_set_crdt", 00:05:45.879 "nvmf_set_config", 00:05:45.879 "nvmf_set_max_subsystems", 00:05:45.879 "nvmf_subsystem_get_listeners", 00:05:45.880 "nvmf_subsystem_get_qpairs", 00:05:45.880 "nvmf_subsystem_get_controllers", 00:05:45.880 "nvmf_get_stats", 00:05:45.880 "nvmf_get_transports", 00:05:45.880 "nvmf_create_transport", 00:05:45.880 "nvmf_get_targets", 00:05:45.880 "nvmf_delete_target", 00:05:45.880 "nvmf_create_target", 00:05:45.880 "nvmf_subsystem_allow_any_host", 00:05:45.880 "nvmf_subsystem_remove_host", 00:05:45.880 "nvmf_subsystem_add_host", 00:05:45.880 "nvmf_ns_remove_host", 00:05:45.880 "nvmf_ns_add_host", 00:05:45.880 "nvmf_subsystem_remove_ns", 00:05:45.880 "nvmf_subsystem_add_ns", 00:05:45.880 "nvmf_subsystem_listener_set_ana_state", 00:05:45.880 "nvmf_discovery_get_referrals", 00:05:45.880 "nvmf_discovery_remove_referral", 00:05:45.880 "nvmf_discovery_add_referral", 00:05:45.880 
"nvmf_subsystem_remove_listener", 00:05:45.880 "nvmf_subsystem_add_listener", 00:05:45.880 "nvmf_delete_subsystem", 00:05:45.880 "nvmf_create_subsystem", 00:05:45.880 "nvmf_get_subsystems", 00:05:45.880 "env_dpdk_get_mem_stats", 00:05:45.880 "nbd_get_disks", 00:05:45.880 "nbd_stop_disk", 00:05:45.880 "nbd_start_disk", 00:05:45.880 "ublk_recover_disk", 00:05:45.880 "ublk_get_disks", 00:05:45.880 "ublk_stop_disk", 00:05:45.880 "ublk_start_disk", 00:05:45.880 "ublk_destroy_target", 00:05:45.880 "ublk_create_target", 00:05:45.880 "virtio_blk_create_transport", 00:05:45.880 "virtio_blk_get_transports", 00:05:45.880 "vhost_controller_set_coalescing", 00:05:45.880 "vhost_get_controllers", 00:05:45.880 "vhost_delete_controller", 00:05:45.880 "vhost_create_blk_controller", 00:05:45.880 "vhost_scsi_controller_remove_target", 00:05:45.880 "vhost_scsi_controller_add_target", 00:05:45.880 "vhost_start_scsi_controller", 00:05:45.880 "vhost_create_scsi_controller", 00:05:45.880 "thread_set_cpumask", 00:05:45.880 "framework_get_scheduler", 00:05:45.880 "framework_set_scheduler", 00:05:45.880 "framework_get_reactors", 00:05:45.880 "thread_get_io_channels", 00:05:45.880 "thread_get_pollers", 00:05:45.880 "thread_get_stats", 00:05:45.880 "framework_monitor_context_switch", 00:05:45.880 "spdk_kill_instance", 00:05:45.880 "log_enable_timestamps", 00:05:45.880 "log_get_flags", 00:05:45.880 "log_clear_flag", 00:05:45.880 "log_set_flag", 00:05:45.880 "log_get_level", 00:05:45.880 "log_set_level", 00:05:45.880 "log_get_print_level", 00:05:45.880 "log_set_print_level", 00:05:45.880 "framework_enable_cpumask_locks", 00:05:45.880 "framework_disable_cpumask_locks", 00:05:45.880 "framework_wait_init", 00:05:45.880 "framework_start_init", 00:05:45.880 "scsi_get_devices", 00:05:45.880 "bdev_get_histogram", 00:05:45.880 "bdev_enable_histogram", 00:05:45.880 "bdev_set_qos_limit", 00:05:45.880 "bdev_set_qd_sampling_period", 00:05:45.880 "bdev_get_bdevs", 00:05:45.880 "bdev_reset_iostat", 00:05:45.880 "bdev_get_iostat", 00:05:45.880 "bdev_examine", 00:05:45.880 "bdev_wait_for_examine", 00:05:45.880 "bdev_set_options", 00:05:45.880 "notify_get_notifications", 00:05:45.880 "notify_get_types", 00:05:45.880 "accel_get_stats", 00:05:45.880 "accel_set_options", 00:05:45.880 "accel_set_driver", 00:05:45.880 "accel_crypto_key_destroy", 00:05:45.880 "accel_crypto_keys_get", 00:05:45.880 "accel_crypto_key_create", 00:05:45.880 "accel_assign_opc", 00:05:45.880 "accel_get_module_info", 00:05:45.880 "accel_get_opc_assignments", 00:05:45.880 "vmd_rescan", 00:05:45.880 "vmd_remove_device", 00:05:45.880 "vmd_enable", 00:05:45.880 "sock_set_default_impl", 00:05:45.880 "sock_impl_set_options", 00:05:45.880 "sock_impl_get_options", 00:05:45.880 "iobuf_get_stats", 00:05:45.880 "iobuf_set_options", 00:05:45.880 "framework_get_pci_devices", 00:05:45.880 "framework_get_config", 00:05:45.880 "framework_get_subsystems", 00:05:45.880 "trace_get_info", 00:05:45.880 "trace_get_tpoint_group_mask", 00:05:45.880 "trace_disable_tpoint_group", 00:05:45.880 "trace_enable_tpoint_group", 00:05:45.880 "trace_clear_tpoint_mask", 00:05:45.880 "trace_set_tpoint_mask", 00:05:45.880 "keyring_get_keys", 00:05:45.880 "spdk_get_version", 00:05:45.880 "rpc_get_methods" 00:05:45.880 ] 00:05:45.880 20:12:15 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:45.880 20:12:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:45.880 20:12:15 -- common/autotest_common.sh@10 -- # set +x 00:05:45.880 20:12:15 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM 
EXIT 00:05:45.880 20:12:15 -- spdkcli/tcp.sh@38 -- # killprocess 62590 00:05:45.880 20:12:15 -- common/autotest_common.sh@936 -- # '[' -z 62590 ']' 00:05:45.880 20:12:15 -- common/autotest_common.sh@940 -- # kill -0 62590 00:05:45.880 20:12:16 -- common/autotest_common.sh@941 -- # uname 00:05:45.880 20:12:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:45.880 20:12:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62590 00:05:45.880 20:12:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:45.880 20:12:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:45.880 killing process with pid 62590 00:05:45.880 20:12:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62590' 00:05:45.880 20:12:16 -- common/autotest_common.sh@955 -- # kill 62590 00:05:45.880 20:12:16 -- common/autotest_common.sh@960 -- # wait 62590 00:05:49.164 00:05:49.164 real 0m4.622s 00:05:49.164 user 0m7.970s 00:05:49.164 sys 0m0.658s 00:05:49.164 20:12:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:49.164 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:05:49.164 ************************************ 00:05:49.164 END TEST spdkcli_tcp 00:05:49.164 ************************************ 00:05:49.164 20:12:18 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.164 20:12:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:49.164 20:12:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:49.164 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:05:49.164 ************************************ 00:05:49.164 START TEST dpdk_mem_utility 00:05:49.164 ************************************ 00:05:49.164 20:12:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.164 * Looking for test storage... 00:05:49.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:49.164 20:12:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:49.164 20:12:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=62717 00:05:49.164 20:12:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:49.164 20:12:18 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 62717 00:05:49.164 20:12:18 -- common/autotest_common.sh@817 -- # '[' -z 62717 ']' 00:05:49.164 20:12:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.164 20:12:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:49.164 20:12:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.164 20:12:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:49.164 20:12:18 -- common/autotest_common.sh@10 -- # set +x 00:05:49.164 [2024-04-24 20:12:19.048379] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
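Once this target is up, the memory report that follows can be produced by hand with the same two scripts the test drives. A sketch (invocations match the traced commands; talking to the default /var/tmp/spdk.sock socket is assumed):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # -> { "filename": "/tmp/spdk_mem_dump.txt" }
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py          # heap, mempool and memzone totals
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0     # per-element breakdown, as dumped below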
00:05:49.164 [2024-04-24 20:12:19.048495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62717 ] 00:05:49.164 [2024-04-24 20:12:19.223813] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.422 [2024-04-24 20:12:19.483464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.357 20:12:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.357 20:12:20 -- common/autotest_common.sh@850 -- # return 0 00:05:50.357 20:12:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:50.357 20:12:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:50.357 20:12:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:50.357 20:12:20 -- common/autotest_common.sh@10 -- # set +x 00:05:50.357 { 00:05:50.357 "filename": "/tmp/spdk_mem_dump.txt" 00:05:50.357 } 00:05:50.357 20:12:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:50.357 20:12:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:50.357 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:50.357 1 heaps totaling size 820.000000 MiB 00:05:50.357 size: 820.000000 MiB heap id: 0 00:05:50.357 end heaps---------- 00:05:50.357 8 mempools totaling size 598.116089 MiB 00:05:50.357 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:50.357 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:50.357 size: 84.521057 MiB name: bdev_io_62717 00:05:50.357 size: 51.011292 MiB name: evtpool_62717 00:05:50.357 size: 50.003479 MiB name: msgpool_62717 00:05:50.357 size: 21.763794 MiB name: PDU_Pool 00:05:50.357 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:50.357 size: 0.026123 MiB name: Session_Pool 00:05:50.357 end mempools------- 00:05:50.357 6 memzones totaling size 4.142822 MiB 00:05:50.357 size: 1.000366 MiB name: RG_ring_0_62717 00:05:50.357 size: 1.000366 MiB name: RG_ring_1_62717 00:05:50.357 size: 1.000366 MiB name: RG_ring_4_62717 00:05:50.357 size: 1.000366 MiB name: RG_ring_5_62717 00:05:50.357 size: 0.125366 MiB name: RG_ring_2_62717 00:05:50.357 size: 0.015991 MiB name: RG_ring_3_62717 00:05:50.357 end memzones------- 00:05:50.357 20:12:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:50.616 heap id: 0 total size: 820.000000 MiB number of busy elements: 297 number of free elements: 18 00:05:50.616 list of free elements. 
size: 18.452271 MiB 00:05:50.616 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:50.616 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:50.616 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:50.616 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:50.616 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:50.616 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:50.616 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:50.616 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:50.616 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:50.616 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:50.616 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:50.616 element at address: 0x200000200000 with size: 0.830200 MiB 00:05:50.616 element at address: 0x20001b000000 with size: 0.564880 MiB 00:05:50.616 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:50.616 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:50.616 element at address: 0x200013800000 with size: 0.467651 MiB 00:05:50.616 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:50.616 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:50.616 list of standard malloc elements. size: 199.283325 MiB 00:05:50.616 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:50.616 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:50.616 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:50.616 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:50.616 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:50.616 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:50.616 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:50.616 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:50.616 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:50.616 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:50.616 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:50.616 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5a80 with size: 0.000244 MiB 
00:05:50.616 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:50.616 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:50.617 element at 
address: 0x200003aff980 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200013877b80 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001927d0c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001927d4c0 
with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b092fc0 with size: 0.000244 MiB 
00:05:50.617 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:50.617 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:50.618 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:50.618 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846b380 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:50.618 element at 
address: 0x20002846b980 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846e480 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846ea80 
with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:50.618 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:50.618 list of memzone associated elements. size: 602.264404 MiB 00:05:50.618 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:50.618 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:50.618 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:50.618 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:50.618 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:50.618 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_62717_0 00:05:50.618 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:50.618 associated memzone info: size: 48.002930 MiB name: MP_evtpool_62717_0 00:05:50.618 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:50.618 associated memzone info: size: 48.002930 MiB name: MP_msgpool_62717_0 00:05:50.618 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:50.618 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:50.618 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:50.618 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:50.618 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:50.618 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_62717 00:05:50.618 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:50.618 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_62717 00:05:50.618 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:50.618 associated memzone info: size: 1.007996 MiB name: MP_evtpool_62717 00:05:50.618 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:50.618 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:50.618 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:50.618 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:50.618 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:50.618 associated memzone info: size: 1.007996 MiB name: 
MP_PDU_data_out_Pool 00:05:50.618 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:50.618 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:50.618 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:50.618 associated memzone info: size: 1.000366 MiB name: RG_ring_0_62717 00:05:50.618 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:50.618 associated memzone info: size: 1.000366 MiB name: RG_ring_1_62717 00:05:50.618 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:50.618 associated memzone info: size: 1.000366 MiB name: RG_ring_4_62717 00:05:50.618 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:50.618 associated memzone info: size: 1.000366 MiB name: RG_ring_5_62717 00:05:50.618 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:50.618 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_62717 00:05:50.618 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:50.618 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:50.618 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:50.618 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:50.618 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:50.618 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:50.618 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:50.618 associated memzone info: size: 0.125366 MiB name: RG_ring_2_62717 00:05:50.618 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:50.618 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:50.618 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:50.618 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:50.618 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:50.618 associated memzone info: size: 0.015991 MiB name: RG_ring_3_62717 00:05:50.618 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:50.618 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:50.619 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:50.619 associated memzone info: size: 0.000183 MiB name: MP_msgpool_62717 00:05:50.619 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:50.619 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_62717 00:05:50.619 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:50.619 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:50.619 20:12:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:50.619 20:12:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 62717 00:05:50.619 20:12:20 -- common/autotest_common.sh@936 -- # '[' -z 62717 ']' 00:05:50.619 20:12:20 -- common/autotest_common.sh@940 -- # kill -0 62717 00:05:50.619 20:12:20 -- common/autotest_common.sh@941 -- # uname 00:05:50.619 20:12:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:50.619 20:12:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62717 00:05:50.619 20:12:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:50.619 killing process with pid 62717 00:05:50.619 20:12:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:50.619 20:12:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62717' 00:05:50.619 20:12:20 -- 
common/autotest_common.sh@955 -- # kill 62717 00:05:50.619 20:12:20 -- common/autotest_common.sh@960 -- # wait 62717 00:05:53.145 00:05:53.145 real 0m4.416s 00:05:53.145 user 0m4.419s 00:05:53.145 sys 0m0.551s 00:05:53.145 20:12:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:53.145 20:12:23 -- common/autotest_common.sh@10 -- # set +x 00:05:53.145 ************************************ 00:05:53.145 END TEST dpdk_mem_utility 00:05:53.145 ************************************ 00:05:53.145 20:12:23 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:53.145 20:12:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:53.145 20:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.145 20:12:23 -- common/autotest_common.sh@10 -- # set +x 00:05:53.403 ************************************ 00:05:53.404 START TEST event 00:05:53.404 ************************************ 00:05:53.404 20:12:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:53.404 * Looking for test storage... 00:05:53.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:53.404 20:12:23 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:53.404 20:12:23 -- bdev/nbd_common.sh@6 -- # set -e 00:05:53.404 20:12:23 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.404 20:12:23 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:53.404 20:12:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:53.404 20:12:23 -- common/autotest_common.sh@10 -- # set +x 00:05:53.404 ************************************ 00:05:53.404 START TEST event_perf 00:05:53.404 ************************************ 00:05:53.404 20:12:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:53.686 Running I/O for 1 seconds...[2024-04-24 20:12:23.674918] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:05:53.686 [2024-04-24 20:12:23.675062] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62838 ] 00:05:53.686 [2024-04-24 20:12:23.859018] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:53.945 [2024-04-24 20:12:24.123049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.945 [2024-04-24 20:12:24.123170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.945 [2024-04-24 20:12:24.123266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.945 Running I/O for 1 seconds...[2024-04-24 20:12:24.123297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.323 00:05:55.323 lcore 0: 190100 00:05:55.323 lcore 1: 190100 00:05:55.323 lcore 2: 190101 00:05:55.323 lcore 3: 190100 00:05:55.581 done. 
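The four lcore counters above appear to be per-core event counts for the one-second event_perf run (-m 0xF spreads reactors across cores 0-3; this reading of the output is an inference, not stated in the log). A quick aggregate from the printed values:

  echo $(( 190100 + 190100 + 190101 + 190100 ))   # ~760k events/s across the four reactors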
00:05:55.581 00:05:55.581 real 0m1.959s 00:05:55.581 user 0m4.697s 00:05:55.581 sys 0m0.138s 00:05:55.581 20:12:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.581 20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.581 ************************************ 00:05:55.581 END TEST event_perf 00:05:55.581 ************************************ 00:05:55.581 20:12:25 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:55.581 20:12:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:55.581 20:12:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.581 20:12:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.581 ************************************ 00:05:55.581 START TEST event_reactor 00:05:55.581 ************************************ 00:05:55.581 20:12:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:55.581 [2024-04-24 20:12:25.794019] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:05:55.581 [2024-04-24 20:12:25.794139] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62887 ] 00:05:55.840 [2024-04-24 20:12:25.970509] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.098 [2024-04-24 20:12:26.222020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.473 test_start 00:05:57.473 oneshot 00:05:57.473 tick 100 00:05:57.473 tick 100 00:05:57.473 tick 250 00:05:57.473 tick 100 00:05:57.473 tick 100 00:05:57.473 tick 100 00:05:57.473 tick 250 00:05:57.473 tick 500 00:05:57.473 tick 100 00:05:57.473 tick 100 00:05:57.473 tick 250 00:05:57.473 tick 100 00:05:57.473 tick 100 00:05:57.473 test_end 00:05:57.473 00:05:57.473 real 0m1.931s 00:05:57.473 user 0m1.694s 00:05:57.473 sys 0m0.126s 00:05:57.473 20:12:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.473 ************************************ 00:05:57.473 END TEST event_reactor 00:05:57.473 ************************************ 00:05:57.473 20:12:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.732 20:12:27 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.732 20:12:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:57.732 20:12:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.732 20:12:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.732 ************************************ 00:05:57.732 START TEST event_reactor_perf 00:05:57.732 ************************************ 00:05:57.732 20:12:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.732 [2024-04-24 20:12:27.886267] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
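Note the core masks in the traces above: event_perf ran with -m 0xF (four reactors), while reactor and reactor_perf run single-core (-c 0x1 in their EAL parameters). A worked example of composing such a mask in shell:

  printf '0x%X\n' $(( (1 << 0) | (1 << 1) | (1 << 2) | (1 << 3) ))   # -> 0xF, i.e. cores 0-3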
00:05:57.732 [2024-04-24 20:12:27.886380] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62933 ] 00:05:57.991 [2024-04-24 20:12:28.061179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.249 [2024-04-24 20:12:28.320222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.621 test_start 00:05:59.621 test_end 00:05:59.621 Performance: 336331 events per second 00:05:59.621 00:05:59.621 real 0m1.939s 00:05:59.621 user 0m1.695s 00:05:59.621 sys 0m0.134s 00:05:59.621 20:12:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.621 20:12:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.621 ************************************ 00:05:59.621 END TEST event_reactor_perf 00:05:59.621 ************************************ 00:05:59.621 20:12:29 -- event/event.sh@49 -- # uname -s 00:05:59.621 20:12:29 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:59.621 20:12:29 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:59.621 20:12:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.621 20:12:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.621 20:12:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.878 ************************************ 00:05:59.878 START TEST event_scheduler 00:05:59.878 ************************************ 00:05:59.879 20:12:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:59.879 * Looking for test storage... 00:05:59.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:59.879 20:12:30 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:59.879 20:12:30 -- scheduler/scheduler.sh@35 -- # scheduler_pid=63006 00:05:59.879 20:12:30 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:59.879 20:12:30 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.879 20:12:30 -- scheduler/scheduler.sh@37 -- # waitforlisten 63006 00:05:59.879 20:12:30 -- common/autotest_common.sh@817 -- # '[' -z 63006 ']' 00:05:59.879 20:12:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.879 20:12:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.879 20:12:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.879 20:12:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.879 20:12:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.140 [2024-04-24 20:12:30.155540] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
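The scheduler app above was launched with -m 0xF -p 0x2; the EAL line below carries --main-lcore=2, so the main reactor lands on core 2, which is why killprocess later sees the process comm as reactor_2. That pinning can be checked directly against the running instance (pid taken from this trace):

  ps --no-headers -o comm= 63006   # -> reactor_2: the main reactor is named after its lcore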
00:06:00.140 [2024-04-24 20:12:30.155689] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63006 ] 00:06:00.140 [2024-04-24 20:12:30.332512] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.397 [2024-04-24 20:12:30.591753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.397 [2024-04-24 20:12:30.591916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.397 [2024-04-24 20:12:30.592042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.397 [2024-04-24 20:12:30.592142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.963 20:12:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.963 20:12:31 -- common/autotest_common.sh@850 -- # return 0 00:06:00.963 20:12:31 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:00.963 20:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.963 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:00.963 POWER: Env isn't set yet! 00:06:00.963 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:00.963 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.963 POWER: Cannot set governor of lcore 0 to userspace 00:06:00.963 POWER: Attempting to initialise PSTAT power management... 00:06:00.963 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.963 POWER: Cannot set governor of lcore 0 to performance 00:06:00.963 POWER: Attempting to initialise AMD PSTATE power management... 00:06:00.963 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.963 POWER: Cannot set governor of lcore 0 to userspace 00:06:00.963 POWER: Attempting to initialise CPPC power management... 00:06:00.963 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:00.963 POWER: Cannot set governor of lcore 0 to userspace 00:06:00.963 POWER: Attempting to initialise VM power management... 00:06:00.963 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:00.963 POWER: Unable to set Power Management Environment for lcore 0 00:06:00.963 [2024-04-24 20:12:31.013215] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:00.963 [2024-04-24 20:12:31.013245] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:00.963 [2024-04-24 20:12:31.013264] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:00.963 20:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:00.963 20:12:31 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:00.963 20:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:00.963 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.221 [2024-04-24 20:12:31.416471] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
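The POWER errors above are the dynamic scheduler probing each DPDK power-management backend in turn (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC, then the VM guest channel) and giving up, most likely because this VM exposes no cpufreq driver; the test then proceeds without a governor. Whether a host supports frequency scaling can be checked at the path the errors reference (cpu0 substituted for the %u placeholder):

  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor   # missing or unwritable here, hence the fallback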
00:06:01.221 20:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.221 20:12:31 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:01.221 20:12:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.221 20:12:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.221 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 ************************************ 00:06:01.479 START TEST scheduler_create_thread 00:06:01.479 ************************************ 00:06:01.479 20:12:31 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:06:01.479 20:12:31 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:01.479 20:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.479 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 2 00:06:01.479 20:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.479 20:12:31 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:01.479 20:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.479 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 3 00:06:01.479 20:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.479 20:12:31 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:01.479 20:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.479 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 4 00:06:01.479 20:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.479 20:12:31 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:01.479 20:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.479 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 5 00:06:01.479 20:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.479 20:12:31 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:01.479 20:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.479 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 6 00:06:01.479 20:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.479 20:12:31 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:01.479 20:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.479 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 7 00:06:01.479 20:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.479 20:12:31 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:01.479 20:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.479 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 8 00:06:01.479 20:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.479 20:12:31 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:01.479 20:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.479 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 9 00:06:01.479 
20:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.479 20:12:31 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:01.479 20:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.479 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.479 10 00:06:01.479 20:12:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.479 20:12:31 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:01.479 20:12:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.479 20:12:31 -- common/autotest_common.sh@10 -- # set +x 00:06:02.852 20:12:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.852 20:12:32 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:02.852 20:12:32 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:02.852 20:12:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.852 20:12:32 -- common/autotest_common.sh@10 -- # set +x 00:06:03.789 20:12:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:03.789 20:12:33 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:03.789 20:12:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:03.789 20:12:33 -- common/autotest_common.sh@10 -- # set +x 00:06:04.357 20:12:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:04.357 20:12:34 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:04.357 20:12:34 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:04.357 20:12:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:04.357 20:12:34 -- common/autotest_common.sh@10 -- # set +x 00:06:05.300 20:12:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:05.300 00:06:05.300 real 0m3.787s 00:06:05.300 user 0m0.014s 00:06:05.300 sys 0m0.008s 00:06:05.300 20:12:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.300 ************************************ 00:06:05.300 END TEST scheduler_create_thread 00:06:05.300 20:12:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.300 ************************************ 00:06:05.300 20:12:35 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:05.300 20:12:35 -- scheduler/scheduler.sh@46 -- # killprocess 63006 00:06:05.300 20:12:35 -- common/autotest_common.sh@936 -- # '[' -z 63006 ']' 00:06:05.300 20:12:35 -- common/autotest_common.sh@940 -- # kill -0 63006 00:06:05.300 20:12:35 -- common/autotest_common.sh@941 -- # uname 00:06:05.300 20:12:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:05.300 20:12:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63006 00:06:05.300 20:12:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:05.300 20:12:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:05.300 killing process with pid 63006 00:06:05.300 20:12:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63006' 00:06:05.300 20:12:35 -- common/autotest_common.sh@955 -- # kill 63006 00:06:05.300 20:12:35 -- common/autotest_common.sh@960 -- # wait 63006 00:06:05.558 [2024-04-24 20:12:35.580865] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
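The thread lifecycle above is driven entirely through plugin RPCs: scheduler_thread_create returns a thread id (11 and 12 in this run), which scheduler_thread_set_active and scheduler_thread_delete then take as their first argument. Replayed by hand it would look roughly like this (rpc_cmd resolves to scripts/rpc.py, and the plugin is assumed importable from test/event/scheduler via PYTHONPATH; both are assumptions about the harness):

  export PYTHONPATH=/home/vagrant/spdk_repo/spdk/test/event/scheduler
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # prints the new thread id
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # ids from this run
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12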
00:06:06.935 00:06:06.935 real 0m7.065s 00:06:06.935 user 0m15.639s 00:06:06.935 sys 0m0.597s 00:06:06.935 20:12:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:06.935 20:12:36 -- common/autotest_common.sh@10 -- # set +x 00:06:06.935 ************************************ 00:06:06.935 END TEST event_scheduler 00:06:06.935 ************************************ 00:06:06.935 20:12:37 -- event/event.sh@51 -- # modprobe -n nbd 00:06:06.935 20:12:37 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:06.935 20:12:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:06.935 20:12:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.935 20:12:37 -- common/autotest_common.sh@10 -- # set +x 00:06:06.935 ************************************ 00:06:06.935 START TEST app_repeat 00:06:06.935 ************************************ 00:06:06.935 20:12:37 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:06:06.935 20:12:37 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.935 20:12:37 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.935 20:12:37 -- event/event.sh@13 -- # local nbd_list 00:06:06.935 20:12:37 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.935 20:12:37 -- event/event.sh@14 -- # local bdev_list 00:06:06.935 20:12:37 -- event/event.sh@15 -- # local repeat_times=4 00:06:06.935 20:12:37 -- event/event.sh@17 -- # modprobe nbd 00:06:06.935 20:12:37 -- event/event.sh@19 -- # repeat_pid=63143 00:06:06.935 20:12:37 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:06.935 20:12:37 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:06.935 Process app_repeat pid: 63143 00:06:06.935 20:12:37 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 63143' 00:06:06.935 20:12:37 -- event/event.sh@23 -- # for i in {0..2} 00:06:06.935 spdk_app_start Round 0 00:06:06.935 20:12:37 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:06.935 20:12:37 -- event/event.sh@25 -- # waitforlisten 63143 /var/tmp/spdk-nbd.sock 00:06:06.935 20:12:37 -- common/autotest_common.sh@817 -- # '[' -z 63143 ']' 00:06:06.935 20:12:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:06.935 20:12:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:06.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:06.935 20:12:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:06.935 20:12:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:06.935 20:12:37 -- common/autotest_common.sh@10 -- # set +x 00:06:07.194 [2024-04-24 20:12:37.206294] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
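app_repeat above serves RPC on /var/tmp/spdk-nbd.sock (-r) with two reactors (-m 0x3), and the trace that follows drives it from outside: Malloc bdevs (64 MB, 4 KiB blocks) are created over that socket, exported as kernel block devices via nbd, and verified with dd and cmp. The core of that sequence, condensed from the trace below:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096        # -> Malloc0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0  # expose it at /dev/nbd0
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct         # direct-I/O readback, as waitfornbd does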
00:06:07.194 [2024-04-24 20:12:37.206415] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63143 ] 00:06:07.194 [2024-04-24 20:12:37.379029] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.452 [2024-04-24 20:12:37.632198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.452 [2024-04-24 20:12:37.632231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.019 20:12:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:08.019 20:12:38 -- common/autotest_common.sh@850 -- # return 0 00:06:08.019 20:12:38 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.276 Malloc0 00:06:08.276 20:12:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.535 Malloc1 00:06:08.535 20:12:38 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@12 -- # local i 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.535 20:12:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.793 /dev/nbd0 00:06:08.793 20:12:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.793 20:12:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.793 20:12:38 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:08.793 20:12:38 -- common/autotest_common.sh@855 -- # local i 00:06:08.793 20:12:38 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:08.793 20:12:38 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:08.793 20:12:38 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:08.793 20:12:38 -- common/autotest_common.sh@859 -- # break 00:06:08.793 20:12:38 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:08.793 20:12:38 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:08.793 20:12:38 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.793 1+0 records in 00:06:08.793 1+0 records out 00:06:08.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298523 s, 13.7 MB/s 00:06:08.793 20:12:38 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.793 20:12:38 -- common/autotest_common.sh@872 -- # size=4096 00:06:08.793 20:12:38 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:08.793 20:12:38 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:08.793 20:12:38 -- common/autotest_common.sh@875 -- # return 0 00:06:08.793 20:12:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.793 20:12:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.793 20:12:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.051 /dev/nbd1 00:06:09.051 20:12:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.051 20:12:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.051 20:12:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:09.051 20:12:39 -- common/autotest_common.sh@855 -- # local i 00:06:09.051 20:12:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:09.051 20:12:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:09.051 20:12:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:09.051 20:12:39 -- common/autotest_common.sh@859 -- # break 00:06:09.051 20:12:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:09.051 20:12:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:09.052 20:12:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.052 1+0 records in 00:06:09.052 1+0 records out 00:06:09.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00033449 s, 12.2 MB/s 00:06:09.052 20:12:39 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.052 20:12:39 -- common/autotest_common.sh@872 -- # size=4096 00:06:09.052 20:12:39 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.052 20:12:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:09.052 20:12:39 -- common/autotest_common.sh@875 -- # return 0 00:06:09.052 20:12:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.052 20:12:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.052 20:12:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.052 20:12:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.052 20:12:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.311 { 00:06:09.311 "nbd_device": "/dev/nbd0", 00:06:09.311 "bdev_name": "Malloc0" 00:06:09.311 }, 00:06:09.311 { 00:06:09.311 "nbd_device": "/dev/nbd1", 00:06:09.311 "bdev_name": "Malloc1" 00:06:09.311 } 00:06:09.311 ]' 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.311 { 00:06:09.311 "nbd_device": "/dev/nbd0", 00:06:09.311 "bdev_name": "Malloc0" 00:06:09.311 }, 00:06:09.311 { 00:06:09.311 "nbd_device": "/dev/nbd1", 00:06:09.311 "bdev_name": "Malloc1" 00:06:09.311 } 00:06:09.311 ]' 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.311 /dev/nbd1' 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.311 /dev/nbd1' 00:06:09.311 20:12:39 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.311 256+0 records in 00:06:09.311 256+0 records out 00:06:09.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0073126 s, 143 MB/s 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.311 256+0 records in 00:06:09.311 256+0 records out 00:06:09.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274671 s, 38.2 MB/s 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.311 256+0 records in 00:06:09.311 256+0 records out 00:06:09.311 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.040041 s, 26.2 MB/s 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@51 -- # local i 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.311 20:12:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.570 20:12:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.570 20:12:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.570 20:12:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.570 20:12:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.570 20:12:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.570 20:12:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.570 20:12:39 -- bdev/nbd_common.sh@41 -- # break 00:06:09.570 20:12:39 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.570 20:12:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.570 20:12:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.829 20:12:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.829 20:12:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.829 20:12:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.829 20:12:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.829 20:12:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.829 20:12:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.829 20:12:39 -- bdev/nbd_common.sh@41 -- # break 00:06:09.829 20:12:39 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.829 20:12:39 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.829 20:12:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.829 20:12:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.095 20:12:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.095 20:12:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.095 20:12:40 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.095 20:12:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.095 20:12:40 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.095 20:12:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.095 20:12:40 -- bdev/nbd_common.sh@65 -- # true 00:06:10.095 20:12:40 -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.095 20:12:40 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.095 20:12:40 -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.095 20:12:40 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.095 20:12:40 -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.095 20:12:40 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.664 20:12:40 -- event/event.sh@35 -- # sleep 3 00:06:12.046 [2024-04-24 20:12:42.050947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.306 [2024-04-24 20:12:42.298965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.306 [2024-04-24 20:12:42.298976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.563 [2024-04-24 20:12:42.553442] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.563 [2024-04-24 20:12:42.553518] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.494 spdk_app_start Round 1 00:06:13.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
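Each app_repeat round in the trace above runs the same nbd_rpc_data_verify cycle: create two 64 MiB malloc bdevs over the nbd RPC socket, expose them as /dev/nbd0 and /dev/nbd1, push a random 1 MiB pattern through dd, read it back with cmp, then detach. Condensed into a standalone sketch (paths and RPC method names are taken from the trace; it assumes an SPDK app is already listening on /var/tmp/spdk-nbd.sock, and the mktemp file is a stand-in for the test's nbdrandtest path):

    #!/usr/bin/env bash
    # Sketch of the malloc -> nbd -> dd/cmp verify cycle shown in the trace.
    set -euo pipefail

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    tmp=$(mktemp)                                    # stand-in for test/event/nbdrandtest

    $rpc -s "$sock" bdev_malloc_create 64 4096       # -> Malloc0
    $rpc -s "$sock" bdev_malloc_create 64 4096       # -> Malloc1
    $rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    $rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of="$tmp" bs=4096 count=256   # 1 MiB random pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it out
        cmp -b -n 1M "$tmp" "$nbd"                   # non-zero exit on any mismatch
    done
    rm "$tmp"

    $rpc -s "$sock" nbd_stop_disk /dev/nbd0
    $rpc -s "$sock" nbd_stop_disk /dev/nbd1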
00:06:13.494 20:12:43 -- event/event.sh@23 -- # for i in {0..2} 00:06:13.494 20:12:43 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:13.494 20:12:43 -- event/event.sh@25 -- # waitforlisten 63143 /var/tmp/spdk-nbd.sock 00:06:13.494 20:12:43 -- common/autotest_common.sh@817 -- # '[' -z 63143 ']' 00:06:13.494 20:12:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.494 20:12:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:13.494 20:12:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.494 20:12:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:13.494 20:12:43 -- common/autotest_common.sh@10 -- # set +x 00:06:13.752 20:12:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:13.752 20:12:43 -- common/autotest_common.sh@850 -- # return 0 00:06:13.752 20:12:43 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.009 Malloc0 00:06:14.009 20:12:44 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.266 Malloc1 00:06:14.266 20:12:44 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@12 -- # local i 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.266 20:12:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:14.524 /dev/nbd0 00:06:14.524 20:12:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:14.524 20:12:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:14.524 20:12:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:14.524 20:12:44 -- common/autotest_common.sh@855 -- # local i 00:06:14.524 20:12:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:14.524 20:12:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:14.524 20:12:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:14.525 20:12:44 -- common/autotest_common.sh@859 -- # break 00:06:14.525 20:12:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:14.525 20:12:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:14.525 20:12:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:14.525 1+0 records in 00:06:14.525 1+0 records out 00:06:14.525 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312594 s, 13.1 MB/s 00:06:14.525 20:12:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.525 20:12:44 -- common/autotest_common.sh@872 -- # size=4096 00:06:14.525 20:12:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.525 20:12:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:14.525 20:12:44 -- common/autotest_common.sh@875 -- # return 0 00:06:14.525 20:12:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.525 20:12:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.525 20:12:44 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:14.782 /dev/nbd1 00:06:14.783 20:12:44 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:14.783 20:12:44 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:14.783 20:12:44 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:14.783 20:12:44 -- common/autotest_common.sh@855 -- # local i 00:06:14.783 20:12:44 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:14.783 20:12:44 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:14.783 20:12:44 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:14.783 20:12:44 -- common/autotest_common.sh@859 -- # break 00:06:14.783 20:12:44 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:14.783 20:12:44 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:14.783 20:12:44 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:14.783 1+0 records in 00:06:14.783 1+0 records out 00:06:14.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348259 s, 11.8 MB/s 00:06:14.783 20:12:44 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.783 20:12:44 -- common/autotest_common.sh@872 -- # size=4096 00:06:14.783 20:12:44 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:14.783 20:12:44 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:14.783 20:12:44 -- common/autotest_common.sh@875 -- # return 0 00:06:14.783 20:12:44 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:14.783 20:12:44 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:14.783 20:12:44 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.783 20:12:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.783 20:12:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:15.041 { 00:06:15.041 "nbd_device": "/dev/nbd0", 00:06:15.041 "bdev_name": "Malloc0" 00:06:15.041 }, 00:06:15.041 { 00:06:15.041 "nbd_device": "/dev/nbd1", 00:06:15.041 "bdev_name": "Malloc1" 00:06:15.041 } 00:06:15.041 ]' 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.041 { 00:06:15.041 "nbd_device": "/dev/nbd0", 00:06:15.041 "bdev_name": "Malloc0" 00:06:15.041 }, 00:06:15.041 { 00:06:15.041 "nbd_device": "/dev/nbd1", 00:06:15.041 "bdev_name": "Malloc1" 00:06:15.041 } 00:06:15.041 ]' 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@64 -- 
# nbd_disks_name='/dev/nbd0 00:06:15.041 /dev/nbd1' 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.041 /dev/nbd1' 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.041 256+0 records in 00:06:15.041 256+0 records out 00:06:15.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134764 s, 77.8 MB/s 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.041 256+0 records in 00:06:15.041 256+0 records out 00:06:15.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0217998 s, 48.1 MB/s 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.041 256+0 records in 00:06:15.041 256+0 records out 00:06:15.041 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0321545 s, 32.6 MB/s 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@51 -- # local i 00:06:15.041 
20:12:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.041 20:12:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:15.298 20:12:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:15.298 20:12:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:15.298 20:12:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:15.298 20:12:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.298 20:12:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.298 20:12:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:15.298 20:12:45 -- bdev/nbd_common.sh@41 -- # break 00:06:15.298 20:12:45 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.298 20:12:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:15.298 20:12:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:15.556 20:12:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:15.556 20:12:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:15.556 20:12:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:15.556 20:12:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:15.556 20:12:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:15.556 20:12:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:15.556 20:12:45 -- bdev/nbd_common.sh@41 -- # break 00:06:15.556 20:12:45 -- bdev/nbd_common.sh@45 -- # return 0 00:06:15.556 20:12:45 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.556 20:12:45 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.556 20:12:45 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.814 20:12:45 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:15.814 20:12:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.814 20:12:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:15.814 20:12:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:15.814 20:12:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.814 20:12:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:15.814 20:12:45 -- bdev/nbd_common.sh@65 -- # true 00:06:15.814 20:12:45 -- bdev/nbd_common.sh@65 -- # count=0 00:06:15.814 20:12:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:15.814 20:12:45 -- bdev/nbd_common.sh@104 -- # count=0 00:06:15.814 20:12:45 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:15.814 20:12:45 -- bdev/nbd_common.sh@109 -- # return 0 00:06:15.814 20:12:45 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.093 20:12:46 -- event/event.sh@35 -- # sleep 3 00:06:17.492 [2024-04-24 20:12:47.523264] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.750 [2024-04-24 20:12:47.755457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.750 [2024-04-24 20:12:47.755475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.008 [2024-04-24 20:12:47.985227] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:18.008 [2024-04-24 20:12:47.985316] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
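The waitfornbd/waitfornbd_exit helpers that bracket every dd above do not trust nbd_start_disk/nbd_stop_disk to be synchronous; they poll /proc/partitions for the device name, giving up after 20 attempts. A simplified sketch (the real helper also probes the device with a direct-I/O dd, and the sleep interval here is an assumption, not visible in the trace):

    # Sketch: wait until the kernel actually exposes /dev/<nbd_name>.
    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && return 0
            sleep 0.1    # assumed back-off; not shown in the trace
        done
        return 1         # device never appeared
    }

    waitfornbd nbd0 || echo "nbd0 did not come up" >&2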
00:06:19.382 spdk_app_start Round 2 00:06:19.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:19.382 20:12:49 -- event/event.sh@23 -- # for i in {0..2} 00:06:19.382 20:12:49 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:19.382 20:12:49 -- event/event.sh@25 -- # waitforlisten 63143 /var/tmp/spdk-nbd.sock 00:06:19.382 20:12:49 -- common/autotest_common.sh@817 -- # '[' -z 63143 ']' 00:06:19.382 20:12:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:19.382 20:12:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:19.382 20:12:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:19.382 20:12:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:19.382 20:12:49 -- common/autotest_common.sh@10 -- # set +x 00:06:19.382 20:12:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:19.382 20:12:49 -- common/autotest_common.sh@850 -- # return 0 00:06:19.382 20:12:49 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.640 Malloc0 00:06:19.640 20:12:49 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:19.897 Malloc1 00:06:19.897 20:12:49 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@12 -- # local i 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.897 20:12:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.155 /dev/nbd0 00:06:20.155 20:12:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.155 20:12:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.155 20:12:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:20.155 20:12:50 -- common/autotest_common.sh@855 -- # local i 00:06:20.155 20:12:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:20.155 20:12:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:20.155 20:12:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:20.155 20:12:50 -- common/autotest_common.sh@859 -- # break 00:06:20.155 20:12:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:20.155 20:12:50 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:06:20.155 20:12:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.155 1+0 records in 00:06:20.155 1+0 records out 00:06:20.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291035 s, 14.1 MB/s 00:06:20.155 20:12:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.155 20:12:50 -- common/autotest_common.sh@872 -- # size=4096 00:06:20.155 20:12:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.155 20:12:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:20.155 20:12:50 -- common/autotest_common.sh@875 -- # return 0 00:06:20.155 20:12:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.155 20:12:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.155 20:12:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:20.413 /dev/nbd1 00:06:20.413 20:12:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:20.413 20:12:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:20.413 20:12:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:20.413 20:12:50 -- common/autotest_common.sh@855 -- # local i 00:06:20.413 20:12:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:20.413 20:12:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:20.413 20:12:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:20.413 20:12:50 -- common/autotest_common.sh@859 -- # break 00:06:20.413 20:12:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:20.413 20:12:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:20.413 20:12:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:20.413 1+0 records in 00:06:20.413 1+0 records out 00:06:20.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327777 s, 12.5 MB/s 00:06:20.413 20:12:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.413 20:12:50 -- common/autotest_common.sh@872 -- # size=4096 00:06:20.413 20:12:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:20.413 20:12:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:20.413 20:12:50 -- common/autotest_common.sh@875 -- # return 0 00:06:20.413 20:12:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.413 20:12:50 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.413 20:12:50 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.413 20:12:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.413 20:12:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.671 20:12:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:20.671 { 00:06:20.671 "nbd_device": "/dev/nbd0", 00:06:20.671 "bdev_name": "Malloc0" 00:06:20.671 }, 00:06:20.671 { 00:06:20.671 "nbd_device": "/dev/nbd1", 00:06:20.671 "bdev_name": "Malloc1" 00:06:20.671 } 00:06:20.671 ]' 00:06:20.671 20:12:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:20.671 { 00:06:20.671 "nbd_device": "/dev/nbd0", 00:06:20.672 "bdev_name": "Malloc0" 00:06:20.672 }, 00:06:20.672 { 00:06:20.672 "nbd_device": "/dev/nbd1", 00:06:20.672 "bdev_name": "Malloc1" 00:06:20.672 } 
00:06:20.672 ]' 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:20.672 /dev/nbd1' 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:20.672 /dev/nbd1' 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@65 -- # count=2 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@95 -- # count=2 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:20.672 256+0 records in 00:06:20.672 256+0 records out 00:06:20.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116004 s, 90.4 MB/s 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:20.672 256+0 records in 00:06:20.672 256+0 records out 00:06:20.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248637 s, 42.2 MB/s 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:20.672 256+0 records in 00:06:20.672 256+0 records out 00:06:20.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320876 s, 32.7 MB/s 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:20.672 20:12:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@51 -- # local i 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.672 20:12:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.930 20:12:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.930 20:12:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.930 20:12:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.930 20:12:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.930 20:12:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.930 20:12:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.930 20:12:51 -- bdev/nbd_common.sh@41 -- # break 00:06:20.930 20:12:51 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.930 20:12:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.930 20:12:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@41 -- # break 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.188 20:12:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.445 20:12:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.445 20:12:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.445 20:12:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.445 20:12:51 -- bdev/nbd_common.sh@65 -- # true 00:06:21.445 20:12:51 -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.445 20:12:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.445 20:12:51 -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.445 20:12:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.445 20:12:51 -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.445 20:12:51 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.702 20:12:51 -- event/event.sh@35 -- # sleep 3 00:06:23.076 [2024-04-24 20:12:53.136329] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.334 [2024-04-24 20:12:53.371195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.334 [2024-04-24 20:12:53.371196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.592 [2024-04-24 20:12:53.611555] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
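nbd_get_count, evaluated both while the disks are attached (expecting 2) and again after teardown (expecting 0), is just the nbd_get_disks JSON run through jq and grep -c; the bare `true` visible after grep in the trace absorbs grep's non-zero exit when nothing matches. As a sketch of the post-teardown check:

    # Sketch of nbd_get_count: how many /dev/nbd* devices the app still exports.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock

    nbd_disks_name=$($rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)  # grep exits 1 on no match

    if [ "$count" -ne 0 ]; then
        echo "devices still attached after teardown: $nbd_disks_name" >&2
        exit 1
    fi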
00:06:23.592 [2024-04-24 20:12:53.611633] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.970 20:12:54 -- event/event.sh@38 -- # waitforlisten 63143 /var/tmp/spdk-nbd.sock 00:06:24.970 20:12:54 -- common/autotest_common.sh@817 -- # '[' -z 63143 ']' 00:06:24.970 20:12:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.970 20:12:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:24.970 20:12:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.970 20:12:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:24.970 20:12:54 -- common/autotest_common.sh@10 -- # set +x 00:06:24.970 20:12:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:24.970 20:12:55 -- common/autotest_common.sh@850 -- # return 0 00:06:24.970 20:12:55 -- event/event.sh@39 -- # killprocess 63143 00:06:24.970 20:12:55 -- common/autotest_common.sh@936 -- # '[' -z 63143 ']' 00:06:24.970 20:12:55 -- common/autotest_common.sh@940 -- # kill -0 63143 00:06:24.970 20:12:55 -- common/autotest_common.sh@941 -- # uname 00:06:24.970 20:12:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.970 20:12:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63143 00:06:24.970 killing process with pid 63143 00:06:24.970 20:12:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.970 20:12:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.970 20:12:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63143' 00:06:24.970 20:12:55 -- common/autotest_common.sh@955 -- # kill 63143 00:06:24.970 20:12:55 -- common/autotest_common.sh@960 -- # wait 63143 00:06:26.374 spdk_app_start is called in Round 0. 00:06:26.374 Shutdown signal received, stop current app iteration 00:06:26.374 Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 reinitialization... 00:06:26.374 spdk_app_start is called in Round 1. 00:06:26.374 Shutdown signal received, stop current app iteration 00:06:26.374 Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 reinitialization... 00:06:26.374 spdk_app_start is called in Round 2. 00:06:26.374 Shutdown signal received, stop current app iteration 00:06:26.374 Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 reinitialization... 00:06:26.374 spdk_app_start is called in Round 3. 
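killprocess, called here on the app_repeat pid, is deliberately paranoid: it confirms the pid is still alive, checks on Linux that the command name is an SPDK reactor rather than a sudo wrapper, and only then signals and reaps it. A reduced sketch of the sequence visible in the trace (the sudo branch is elided):

    # Sketch of the killprocess guard sequence.
    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid"                                      # fails if the pid is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_0
            [ "$process_name" != sudo ] || return 1         # elided: the trace only tests this
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap it so its sockets and locks are released
    }

    killprocess 63143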
00:06:26.374 Shutdown signal received, stop current app iteration 00:06:26.374 ************************************ 00:06:26.374 END TEST app_repeat 00:06:26.374 ************************************ 00:06:26.374 20:12:56 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:26.374 20:12:56 -- event/event.sh@42 -- # return 0 00:06:26.374 00:06:26.374 real 0m19.125s 00:06:26.374 user 0m39.010s 00:06:26.374 sys 0m2.990s 00:06:26.374 20:12:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.374 20:12:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.374 20:12:56 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:26.374 20:12:56 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.374 20:12:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.374 20:12:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.374 20:12:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.374 ************************************ 00:06:26.374 START TEST cpu_locks 00:06:26.374 ************************************ 00:06:26.374 20:12:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:26.374 * Looking for test storage... 00:06:26.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:26.374 20:12:56 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:26.374 20:12:56 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:26.374 20:12:56 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:26.374 20:12:56 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:26.374 20:12:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.374 20:12:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.374 20:12:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.374 ************************************ 00:06:26.374 START TEST default_locks 00:06:26.374 ************************************ 00:06:26.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.374 20:12:56 -- common/autotest_common.sh@1111 -- # default_locks 00:06:26.374 20:12:56 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=63587 00:06:26.374 20:12:56 -- event/cpu_locks.sh@47 -- # waitforlisten 63587 00:06:26.374 20:12:56 -- common/autotest_common.sh@817 -- # '[' -z 63587 ']' 00:06:26.374 20:12:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.374 20:12:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.374 20:12:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.374 20:12:56 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.374 20:12:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.374 20:12:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.633 [2024-04-24 20:12:56.690267] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:06:26.633 [2024-04-24 20:12:56.690384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63587 ] 00:06:26.633 [2024-04-24 20:12:56.861550] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.891 [2024-04-24 20:12:57.097851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.827 20:12:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:27.827 20:12:58 -- common/autotest_common.sh@850 -- # return 0 00:06:27.827 20:12:58 -- event/cpu_locks.sh@49 -- # locks_exist 63587 00:06:27.827 20:12:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.827 20:12:58 -- event/cpu_locks.sh@22 -- # lslocks -p 63587 00:06:28.393 20:12:58 -- event/cpu_locks.sh@50 -- # killprocess 63587 00:06:28.393 20:12:58 -- common/autotest_common.sh@936 -- # '[' -z 63587 ']' 00:06:28.393 20:12:58 -- common/autotest_common.sh@940 -- # kill -0 63587 00:06:28.393 20:12:58 -- common/autotest_common.sh@941 -- # uname 00:06:28.393 20:12:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:28.393 20:12:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63587 00:06:28.393 killing process with pid 63587 00:06:28.393 20:12:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:28.393 20:12:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:28.393 20:12:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63587' 00:06:28.393 20:12:58 -- common/autotest_common.sh@955 -- # kill 63587 00:06:28.393 20:12:58 -- common/autotest_common.sh@960 -- # wait 63587 00:06:30.927 20:13:00 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 63587 00:06:30.927 20:13:00 -- common/autotest_common.sh@638 -- # local es=0 00:06:30.927 20:13:00 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 63587 00:06:30.927 20:13:00 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:30.927 20:13:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:30.927 20:13:00 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:30.927 20:13:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:30.927 20:13:00 -- common/autotest_common.sh@641 -- # waitforlisten 63587 00:06:30.927 20:13:00 -- common/autotest_common.sh@817 -- # '[' -z 63587 ']' 00:06:30.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.927 20:13:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.927 20:13:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:30.927 20:13:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:30.927 20:13:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:30.927 20:13:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.927 ERROR: process (pid: 63587) is no longer running 00:06:30.927 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (63587) - No such process 00:06:30.927 20:13:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:30.927 20:13:00 -- common/autotest_common.sh@850 -- # return 1 00:06:30.927 20:13:00 -- common/autotest_common.sh@641 -- # es=1 00:06:30.927 20:13:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:30.927 20:13:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:30.927 20:13:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:30.927 20:13:00 -- event/cpu_locks.sh@54 -- # no_locks 00:06:30.927 20:13:00 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:30.927 20:13:00 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:30.927 20:13:00 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:30.927 ************************************ 00:06:30.927 END TEST default_locks 00:06:30.927 ************************************ 00:06:30.927 00:06:30.927 real 0m4.312s 00:06:30.927 user 0m4.287s 00:06:30.927 sys 0m0.665s 00:06:30.927 20:13:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.927 20:13:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.927 20:13:00 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:30.927 20:13:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:30.927 20:13:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.927 20:13:00 -- common/autotest_common.sh@10 -- # set +x 00:06:30.927 ************************************ 00:06:30.927 START TEST default_locks_via_rpc 00:06:30.927 ************************************ 00:06:30.927 20:13:01 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:06:30.927 20:13:01 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=63667 00:06:30.927 20:13:01 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.927 20:13:01 -- event/cpu_locks.sh@63 -- # waitforlisten 63667 00:06:30.927 20:13:01 -- common/autotest_common.sh@817 -- # '[' -z 63667 ']' 00:06:30.927 20:13:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.927 20:13:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:30.927 20:13:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.927 20:13:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:30.927 20:13:01 -- common/autotest_common.sh@10 -- # set +x 00:06:31.185 [2024-04-24 20:13:01.161239] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
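TEST default_locks, which finishes above, makes one positive and one negative assertion: while the -m 0x1 target runs it must hold an spdk_cpu_lock file lock (checked via lslocks), and once killed, waitforlisten against the stale pid must fail, which the NOT wrapper inverts into a pass while still treating signal deaths (es > 128) as real errors. Both checks, reduced to a sketch:

    # Sketch of the two default_locks assertions.
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock   # the per-core lock shows up here
    }

    NOT() {
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then
            return "$es"    # killed by a signal: a real failure, do not invert
        fi
        (( es != 0 ))       # NOT succeeds only when the command failed cleanly
    }

    locks_exist 63587           # passes while the target is up
    killprocess 63587
    NOT waitforlisten 63587     # passes because the pid is gone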
00:06:31.185 [2024-04-24 20:13:01.161353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63667 ] 00:06:31.185 [2024-04-24 20:13:01.331083] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.444 [2024-04-24 20:13:01.565383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.379 20:13:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:32.380 20:13:02 -- common/autotest_common.sh@850 -- # return 0 00:06:32.380 20:13:02 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:32.380 20:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:32.380 20:13:02 -- common/autotest_common.sh@10 -- # set +x 00:06:32.380 20:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:32.380 20:13:02 -- event/cpu_locks.sh@67 -- # no_locks 00:06:32.380 20:13:02 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:32.380 20:13:02 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:32.380 20:13:02 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:32.380 20:13:02 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:32.380 20:13:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:32.380 20:13:02 -- common/autotest_common.sh@10 -- # set +x 00:06:32.380 20:13:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:32.380 20:13:02 -- event/cpu_locks.sh@71 -- # locks_exist 63667 00:06:32.380 20:13:02 -- event/cpu_locks.sh@22 -- # lslocks -p 63667 00:06:32.380 20:13:02 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.947 20:13:02 -- event/cpu_locks.sh@73 -- # killprocess 63667 00:06:32.947 20:13:02 -- common/autotest_common.sh@936 -- # '[' -z 63667 ']' 00:06:32.947 20:13:02 -- common/autotest_common.sh@940 -- # kill -0 63667 00:06:32.947 20:13:02 -- common/autotest_common.sh@941 -- # uname 00:06:32.947 20:13:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:32.947 20:13:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63667 00:06:32.947 killing process with pid 63667 00:06:32.947 20:13:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:32.947 20:13:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:32.947 20:13:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63667' 00:06:32.947 20:13:02 -- common/autotest_common.sh@955 -- # kill 63667 00:06:32.947 20:13:02 -- common/autotest_common.sh@960 -- # wait 63667 00:06:35.484 00:06:35.484 real 0m4.329s 00:06:35.484 user 0m4.218s 00:06:35.484 sys 0m0.667s 00:06:35.484 20:13:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:35.484 ************************************ 00:06:35.484 END TEST default_locks_via_rpc 00:06:35.484 ************************************ 00:06:35.484 20:13:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.484 20:13:05 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:35.484 20:13:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:35.484 20:13:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:35.484 20:13:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.484 ************************************ 00:06:35.484 START TEST non_locking_app_on_locked_coremask 00:06:35.484 ************************************ 00:06:35.484 20:13:05 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:06:35.484 20:13:05 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=63745 00:06:35.484 20:13:05 -- event/cpu_locks.sh@81 -- # waitforlisten 63745 /var/tmp/spdk.sock 00:06:35.484 20:13:05 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.484 20:13:05 -- common/autotest_common.sh@817 -- # '[' -z 63745 ']' 00:06:35.484 20:13:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.484 20:13:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:35.484 20:13:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.484 20:13:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:35.484 20:13:05 -- common/autotest_common.sh@10 -- # set +x 00:06:35.484 [2024-04-24 20:13:05.624290] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:06:35.484 [2024-04-24 20:13:05.624416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63745 ] 00:06:35.744 [2024-04-24 20:13:05.795902] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.003 [2024-04-24 20:13:06.035233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.942 20:13:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:36.942 20:13:06 -- common/autotest_common.sh@850 -- # return 0 00:06:36.942 20:13:06 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=63772 00:06:36.942 20:13:06 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:36.942 20:13:06 -- event/cpu_locks.sh@85 -- # waitforlisten 63772 /var/tmp/spdk2.sock 00:06:36.942 20:13:06 -- common/autotest_common.sh@817 -- # '[' -z 63772 ']' 00:06:36.942 20:13:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.942 20:13:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:36.942 20:13:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:36.942 20:13:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:36.942 20:13:06 -- common/autotest_common.sh@10 -- # set +x 00:06:36.942 [2024-04-24 20:13:07.099148] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:06:36.942 [2024-04-24 20:13:07.099516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63772 ] 00:06:37.202 [2024-04-24 20:13:07.284067] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
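The "CPU core locks deactivated" notice above is the crux of this test: the second target may share core 0 with the lock-holding first target only because it was launched with --disable-cpumask-locks (and given its own RPC socket via -r). The same state can also be toggled on a live target with the framework_disable_cpumask_locks / framework_enable_cpumask_locks RPCs exercised in default_locks_via_rpc earlier. A sketch of the launch sequence, with binary and socket paths as in the trace:

    # Sketch: two targets sharing core 0, the second opting out of core locks.
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $tgt -m 0x1 &                                               # takes spdk_cpu_lock for core 0
    pid1=$!
    $tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                     # logs "CPU core locks deactivated"

    # Lock handling can also be flipped at runtime (default_locks_via_rpc does this):
    $rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    $rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks   # re-acquires the locks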
00:06:37.202 [2024-04-24 20:13:07.284128] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.771 [2024-04-24 20:13:07.751063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.716 20:13:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:39.716 20:13:09 -- common/autotest_common.sh@850 -- # return 0 00:06:39.716 20:13:09 -- event/cpu_locks.sh@87 -- # locks_exist 63745 00:06:39.716 20:13:09 -- event/cpu_locks.sh@22 -- # lslocks -p 63745 00:06:39.716 20:13:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.655 20:13:10 -- event/cpu_locks.sh@89 -- # killprocess 63745 00:06:40.655 20:13:10 -- common/autotest_common.sh@936 -- # '[' -z 63745 ']' 00:06:40.655 20:13:10 -- common/autotest_common.sh@940 -- # kill -0 63745 00:06:40.655 20:13:10 -- common/autotest_common.sh@941 -- # uname 00:06:40.655 20:13:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:40.655 20:13:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63745 00:06:40.655 20:13:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:40.655 20:13:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:40.655 killing process with pid 63745 00:06:40.655 20:13:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63745' 00:06:40.655 20:13:10 -- common/autotest_common.sh@955 -- # kill 63745 00:06:40.655 20:13:10 -- common/autotest_common.sh@960 -- # wait 63745 00:06:45.948 20:13:15 -- event/cpu_locks.sh@90 -- # killprocess 63772 00:06:45.948 20:13:15 -- common/autotest_common.sh@936 -- # '[' -z 63772 ']' 00:06:45.948 20:13:15 -- common/autotest_common.sh@940 -- # kill -0 63772 00:06:45.948 20:13:15 -- common/autotest_common.sh@941 -- # uname 00:06:45.948 20:13:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:45.948 20:13:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63772 00:06:45.948 killing process with pid 63772 00:06:45.948 20:13:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:45.948 20:13:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:45.948 20:13:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63772' 00:06:45.948 20:13:15 -- common/autotest_common.sh@955 -- # kill 63772 00:06:45.948 20:13:15 -- common/autotest_common.sh@960 -- # wait 63772 00:06:47.879 00:06:47.879 real 0m12.320s 00:06:47.879 user 0m12.599s 00:06:47.879 sys 0m1.404s 00:06:47.879 20:13:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:47.879 ************************************ 00:06:47.879 END TEST non_locking_app_on_locked_coremask 00:06:47.879 ************************************ 00:06:47.879 20:13:17 -- common/autotest_common.sh@10 -- # set +x 00:06:47.879 20:13:17 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:47.879 20:13:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.879 20:13:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.879 20:13:17 -- common/autotest_common.sh@10 -- # set +x 00:06:47.879 ************************************ 00:06:47.879 START TEST locking_app_on_unlocked_coremask 00:06:47.879 ************************************ 00:06:47.879 20:13:17 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:47.879 20:13:17 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63930 00:06:47.879 20:13:17 -- event/cpu_locks.sh@99 -- # waitforlisten 63930 /var/tmp/spdk.sock 
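The killprocess calls in the teardown above follow one idiom throughout this suite: probe the pid with kill -0, check that the command name is the expected reactor thread, then kill and wait so the CPU-lock files are released before the next test starts. A condensed reconstruction from the traced lines, not the verbatim autotest_common.sh source:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                                # still alive?
        [[ $(ps --no-headers -o comm= "$pid") == reactor_* ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                                # reap it so the locks drop
    }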
00:06:47.879 20:13:17 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:47.879 20:13:17 -- common/autotest_common.sh@817 -- # '[' -z 63930 ']' 00:06:47.879 20:13:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.879 20:13:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:47.879 20:13:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.879 20:13:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:47.879 20:13:17 -- common/autotest_common.sh@10 -- # set +x 00:06:47.879 [2024-04-24 20:13:18.088780] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:06:47.879 [2024-04-24 20:13:18.088914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63930 ] 00:06:48.138 [2024-04-24 20:13:18.261194] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:48.138 [2024-04-24 20:13:18.261254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.397 [2024-04-24 20:13:18.493986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.335 20:13:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:49.335 20:13:19 -- common/autotest_common.sh@850 -- # return 0 00:06:49.335 20:13:19 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63947 00:06:49.335 20:13:19 -- event/cpu_locks.sh@103 -- # waitforlisten 63947 /var/tmp/spdk2.sock 00:06:49.335 20:13:19 -- common/autotest_common.sh@817 -- # '[' -z 63947 ']' 00:06:49.335 20:13:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.335 20:13:19 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.336 20:13:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:49.336 20:13:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.336 20:13:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:49.336 20:13:19 -- common/autotest_common.sh@10 -- # set +x 00:06:49.336 [2024-04-24 20:13:19.542155] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
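locking_app_on_unlocked_coremask reverses the earlier roles: the primary (pid 63930) runs with --disable-cpumask-locks and holds no lock, so the secondary (pid 63947), started without the flag, claims core 0 itself. The locks_exist check traced below verifies the claim by asking the kernel who holds the lock file:

    # true only if the pid has the spdk_cpu_lock file locked
    lslocks -p 63947 | grep -q spdk_cpu_lock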
00:06:49.336 [2024-04-24 20:13:19.542461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63947 ] 00:06:49.594 [2024-04-24 20:13:19.710203] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.161 [2024-04-24 20:13:20.189866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.068 20:13:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:52.068 20:13:22 -- common/autotest_common.sh@850 -- # return 0 00:06:52.068 20:13:22 -- event/cpu_locks.sh@105 -- # locks_exist 63947 00:06:52.068 20:13:22 -- event/cpu_locks.sh@22 -- # lslocks -p 63947 00:06:52.069 20:13:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.071 20:13:23 -- event/cpu_locks.sh@107 -- # killprocess 63930 00:06:53.071 20:13:23 -- common/autotest_common.sh@936 -- # '[' -z 63930 ']' 00:06:53.071 20:13:23 -- common/autotest_common.sh@940 -- # kill -0 63930 00:06:53.071 20:13:23 -- common/autotest_common.sh@941 -- # uname 00:06:53.071 20:13:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:53.071 20:13:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63930 00:06:53.071 killing process with pid 63930 00:06:53.071 20:13:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:53.071 20:13:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:53.071 20:13:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63930' 00:06:53.071 20:13:23 -- common/autotest_common.sh@955 -- # kill 63930 00:06:53.071 20:13:23 -- common/autotest_common.sh@960 -- # wait 63930 00:06:58.348 20:13:27 -- event/cpu_locks.sh@108 -- # killprocess 63947 00:06:58.348 20:13:27 -- common/autotest_common.sh@936 -- # '[' -z 63947 ']' 00:06:58.348 20:13:27 -- common/autotest_common.sh@940 -- # kill -0 63947 00:06:58.348 20:13:27 -- common/autotest_common.sh@941 -- # uname 00:06:58.348 20:13:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.348 20:13:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63947 00:06:58.348 killing process with pid 63947 00:06:58.348 20:13:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:58.348 20:13:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:58.348 20:13:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63947' 00:06:58.348 20:13:27 -- common/autotest_common.sh@955 -- # kill 63947 00:06:58.348 20:13:27 -- common/autotest_common.sh@960 -- # wait 63947 00:07:00.249 00:07:00.249 real 0m12.417s 00:07:00.249 user 0m12.678s 00:07:00.249 sys 0m1.429s 00:07:00.249 20:13:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:00.250 ************************************ 00:07:00.250 END TEST locking_app_on_unlocked_coremask 00:07:00.250 ************************************ 00:07:00.250 20:13:30 -- common/autotest_common.sh@10 -- # set +x 00:07:00.250 20:13:30 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:00.250 20:13:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:00.250 20:13:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:00.250 20:13:30 -- common/autotest_common.sh@10 -- # set +x 00:07:00.508 ************************************ 00:07:00.508 START TEST locking_app_on_locked_coremask 00:07:00.508 
************************************ 00:07:00.508 20:13:30 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:07:00.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.508 20:13:30 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=64110 00:07:00.508 20:13:30 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:00.508 20:13:30 -- event/cpu_locks.sh@116 -- # waitforlisten 64110 /var/tmp/spdk.sock 00:07:00.508 20:13:30 -- common/autotest_common.sh@817 -- # '[' -z 64110 ']' 00:07:00.508 20:13:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.508 20:13:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:00.508 20:13:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.508 20:13:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:00.508 20:13:30 -- common/autotest_common.sh@10 -- # set +x 00:07:00.508 [2024-04-24 20:13:30.662320] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:07:00.508 [2024-04-24 20:13:30.662442] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64110 ] 00:07:00.767 [2024-04-24 20:13:30.834819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.027 [2024-04-24 20:13:31.073844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.964 20:13:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:01.964 20:13:32 -- common/autotest_common.sh@850 -- # return 0 00:07:01.964 20:13:32 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:01.964 20:13:32 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=64126 00:07:01.964 20:13:32 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 64126 /var/tmp/spdk2.sock 00:07:01.964 20:13:32 -- common/autotest_common.sh@638 -- # local es=0 00:07:01.964 20:13:32 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 64126 /var/tmp/spdk2.sock 00:07:01.964 20:13:32 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:01.964 20:13:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:01.964 20:13:32 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:01.964 20:13:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:01.964 20:13:32 -- common/autotest_common.sh@641 -- # waitforlisten 64126 /var/tmp/spdk2.sock 00:07:01.964 20:13:32 -- common/autotest_common.sh@817 -- # '[' -z 64126 ']' 00:07:01.964 20:13:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.964 20:13:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:01.964 20:13:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.964 20:13:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:01.964 20:13:32 -- common/autotest_common.sh@10 -- # set +x 00:07:01.964 [2024-04-24 20:13:32.121153] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
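The NOT wrapper that appears here (valid_exec_arg, es=0, and the inverted status check) asserts that a command fails: waitforlisten on pid 64126 must not succeed, because that target is expected to abort on the already-claimed core. A simplified sketch of the behavior visible in the trace, not the verbatim autotest_common.sh source:

    NOT() {
        local es=0
        "$@" || es=$?                          # run the wrapped command
        (( es > 128 )) && es=$(( es - 128 ))   # fold signal-style statuses
        (( es != 0 ))                          # succeed only if the command failed
    }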
00:07:01.964 [2024-04-24 20:13:32.121671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64126 ] 00:07:02.223 [2024-04-24 20:13:32.290346] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 64110 has claimed it. 00:07:02.223 [2024-04-24 20:13:32.290429] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:02.790 ERROR: process (pid: 64126) is no longer running 00:07:02.790 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (64126) - No such process 00:07:02.790 20:13:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:02.790 20:13:32 -- common/autotest_common.sh@850 -- # return 1 00:07:02.790 20:13:32 -- common/autotest_common.sh@641 -- # es=1 00:07:02.790 20:13:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:02.790 20:13:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:02.790 20:13:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:02.790 20:13:32 -- event/cpu_locks.sh@122 -- # locks_exist 64110 00:07:02.790 20:13:32 -- event/cpu_locks.sh@22 -- # lslocks -p 64110 00:07:02.790 20:13:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.049 20:13:33 -- event/cpu_locks.sh@124 -- # killprocess 64110 00:07:03.049 20:13:33 -- common/autotest_common.sh@936 -- # '[' -z 64110 ']' 00:07:03.049 20:13:33 -- common/autotest_common.sh@940 -- # kill -0 64110 00:07:03.049 20:13:33 -- common/autotest_common.sh@941 -- # uname 00:07:03.049 20:13:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:03.049 20:13:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64110 00:07:03.049 20:13:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:03.049 killing process with pid 64110 00:07:03.049 20:13:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:03.049 20:13:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64110' 00:07:03.049 20:13:33 -- common/autotest_common.sh@955 -- # kill 64110 00:07:03.049 20:13:33 -- common/autotest_common.sh@960 -- # wait 64110 00:07:05.584 ************************************ 00:07:05.584 END TEST locking_app_on_locked_coremask 00:07:05.584 ************************************ 00:07:05.584 00:07:05.584 real 0m5.144s 00:07:05.584 user 0m5.240s 00:07:05.584 sys 0m0.865s 00:07:05.584 20:13:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.584 20:13:35 -- common/autotest_common.sh@10 -- # set +x 00:07:05.584 20:13:35 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:05.584 20:13:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.584 20:13:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.584 20:13:35 -- common/autotest_common.sh@10 -- # set +x 00:07:05.842 ************************************ 00:07:05.842 START TEST locking_overlapped_coremask 00:07:05.842 ************************************ 00:07:05.842 20:13:35 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:07:05.842 20:13:35 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=64209 00:07:05.842 20:13:35 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:05.842 20:13:35 -- event/cpu_locks.sh@133 -- # waitforlisten 64209 /var/tmp/spdk.sock 00:07:05.842 
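locking_overlapped_coremask, starting above, moves to multi-core masks: -m 0x7 is binary 111, so three reactors come up on cores 0 through 2, and the second instance later uses -m 0x1c (binary 11100, cores 2 through 4) to force an overlap on core 2. Expanding a mask by hand, as an illustration:

    mask=0x1c
    for core in {0..7}; do
        (( (mask >> core) & 1 )) && echo "core $core"
    done    # prints core 2, core 3, core 4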
20:13:35 -- common/autotest_common.sh@817 -- # '[' -z 64209 ']' 00:07:05.842 20:13:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.842 20:13:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:05.842 20:13:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.842 20:13:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:05.842 20:13:35 -- common/autotest_common.sh@10 -- # set +x 00:07:05.842 [2024-04-24 20:13:35.958919] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:07:05.842 [2024-04-24 20:13:35.959232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64209 ] 00:07:06.099 [2024-04-24 20:13:36.116689] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.357 [2024-04-24 20:13:36.392095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.357 [2024-04-24 20:13:36.392180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.357 [2024-04-24 20:13:36.392208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.295 20:13:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:07.295 20:13:37 -- common/autotest_common.sh@850 -- # return 0 00:07:07.295 20:13:37 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=64227 00:07:07.295 20:13:37 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:07.295 20:13:37 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 64227 /var/tmp/spdk2.sock 00:07:07.295 20:13:37 -- common/autotest_common.sh@638 -- # local es=0 00:07:07.295 20:13:37 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 64227 /var/tmp/spdk2.sock 00:07:07.295 20:13:37 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:07.295 20:13:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:07.295 20:13:37 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:07.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.295 20:13:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:07.295 20:13:37 -- common/autotest_common.sh@641 -- # waitforlisten 64227 /var/tmp/spdk2.sock 00:07:07.295 20:13:37 -- common/autotest_common.sh@817 -- # '[' -z 64227 ']' 00:07:07.295 20:13:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.295 20:13:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:07.295 20:13:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.295 20:13:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:07.295 20:13:37 -- common/autotest_common.sh@10 -- # set +x 00:07:07.295 [2024-04-24 20:13:37.495506] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
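Once the second target aborts on the shared core, check_remaining_locks (traced a little further down) confirms that exactly the primary's three lock files survive. Its comparison, lifted almost verbatim from the cpu_locks.sh lines in this log:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]    # only cores 0-2 may hold locks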
00:07:07.295 [2024-04-24 20:13:37.495622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64227 ] 00:07:07.554 [2024-04-24 20:13:37.666834] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64209 has claimed it. 00:07:07.554 [2024-04-24 20:13:37.666921] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:08.123 ERROR: process (pid: 64227) is no longer running 00:07:08.123 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (64227) - No such process 00:07:08.123 20:13:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:08.123 20:13:38 -- common/autotest_common.sh@850 -- # return 1 00:07:08.123 20:13:38 -- common/autotest_common.sh@641 -- # es=1 00:07:08.123 20:13:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:08.123 20:13:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:08.123 20:13:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:08.123 20:13:38 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:08.123 20:13:38 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.123 20:13:38 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.123 20:13:38 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.123 20:13:38 -- event/cpu_locks.sh@141 -- # killprocess 64209 00:07:08.123 20:13:38 -- common/autotest_common.sh@936 -- # '[' -z 64209 ']' 00:07:08.123 20:13:38 -- common/autotest_common.sh@940 -- # kill -0 64209 00:07:08.123 20:13:38 -- common/autotest_common.sh@941 -- # uname 00:07:08.123 20:13:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:08.123 20:13:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64209 00:07:08.123 killing process with pid 64209 00:07:08.123 20:13:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:08.123 20:13:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:08.123 20:13:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64209' 00:07:08.123 20:13:38 -- common/autotest_common.sh@955 -- # kill 64209 00:07:08.123 20:13:38 -- common/autotest_common.sh@960 -- # wait 64209 00:07:10.658 ************************************ 00:07:10.658 END TEST locking_overlapped_coremask 00:07:10.658 ************************************ 00:07:10.658 00:07:10.658 real 0m4.757s 00:07:10.658 user 0m12.350s 00:07:10.658 sys 0m0.646s 00:07:10.658 20:13:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:10.658 20:13:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.658 20:13:40 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:10.658 20:13:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.658 20:13:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.658 20:13:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.658 ************************************ 00:07:10.658 START TEST locking_overlapped_coremask_via_rpc 00:07:10.658 
************************************ 00:07:10.658 20:13:40 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:07:10.658 20:13:40 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=64301 00:07:10.658 20:13:40 -- event/cpu_locks.sh@149 -- # waitforlisten 64301 /var/tmp/spdk.sock 00:07:10.658 20:13:40 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:10.658 20:13:40 -- common/autotest_common.sh@817 -- # '[' -z 64301 ']' 00:07:10.658 20:13:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.658 20:13:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:10.658 20:13:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.658 20:13:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.658 20:13:40 -- common/autotest_common.sh@10 -- # set +x 00:07:10.917 [2024-04-24 20:13:40.900959] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:07:10.917 [2024-04-24 20:13:40.901116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64301 ] 00:07:10.917 [2024-04-24 20:13:41.080086] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:10.917 [2024-04-24 20:13:41.080145] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.176 [2024-04-24 20:13:41.318994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.176 [2024-04-24 20:13:41.319147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.176 [2024-04-24 20:13:41.319186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.113 20:13:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:12.113 20:13:42 -- common/autotest_common.sh@850 -- # return 0 00:07:12.113 20:13:42 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:12.113 20:13:42 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=64324 00:07:12.113 20:13:42 -- event/cpu_locks.sh@153 -- # waitforlisten 64324 /var/tmp/spdk2.sock 00:07:12.113 20:13:42 -- common/autotest_common.sh@817 -- # '[' -z 64324 ']' 00:07:12.113 20:13:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.113 20:13:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:12.113 20:13:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.113 20:13:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:12.113 20:13:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.373 [2024-04-24 20:13:42.386602] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
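In the via-RPC variant both targets boot with --disable-cpumask-locks, so the overlapping masks 0x7 and 0x1c coexist at startup and the contention is provoked afterwards over JSON-RPC. rpc_cmd here wraps scripts/rpc.py, so the equivalent manual sequence would look roughly like this (socket paths as in the log):

    scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # primary claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # must fail: core 2 is taken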
00:07:12.373 [2024-04-24 20:13:42.386933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64324 ] 00:07:12.373 [2024-04-24 20:13:42.554636] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:12.373 [2024-04-24 20:13:42.554698] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:13.076 [2024-04-24 20:13:43.053115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.076 [2024-04-24 20:13:43.053279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.076 [2024-04-24 20:13:43.053303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:14.981 20:13:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:14.981 20:13:44 -- common/autotest_common.sh@850 -- # return 0 00:07:14.981 20:13:44 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:14.981 20:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.981 20:13:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.981 20:13:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:14.981 20:13:44 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.981 20:13:44 -- common/autotest_common.sh@638 -- # local es=0 00:07:14.981 20:13:44 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.981 20:13:44 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:07:14.981 20:13:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:14.981 20:13:44 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:07:14.981 20:13:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:14.981 20:13:44 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:14.981 20:13:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:14.981 20:13:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.981 [2024-04-24 20:13:44.974070] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 64301 has claimed it. 00:07:14.981 request: 00:07:14.981 { 00:07:14.981 "method": "framework_enable_cpumask_locks", 00:07:14.981 "req_id": 1 00:07:14.981 } 00:07:14.981 Got JSON-RPC error response 00:07:14.982 response: 00:07:14.982 { 00:07:14.982 "code": -32603, 00:07:14.982 "message": "Failed to claim CPU core: 2" 00:07:14.982 } 00:07:14.982 20:13:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:07:14.982 20:13:44 -- common/autotest_common.sh@641 -- # es=1 00:07:14.982 20:13:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:14.982 20:13:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:14.982 20:13:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:14.982 20:13:44 -- event/cpu_locks.sh@158 -- # waitforlisten 64301 /var/tmp/spdk.sock 00:07:14.982 20:13:44 -- common/autotest_common.sh@817 -- # '[' -z 64301 ']' 00:07:14.982 20:13:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.982 20:13:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:14.982 20:13:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:14.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.982 20:13:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:14.982 20:13:44 -- common/autotest_common.sh@10 -- # set +x 00:07:14.982 20:13:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:14.982 20:13:45 -- common/autotest_common.sh@850 -- # return 0 00:07:14.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.982 20:13:45 -- event/cpu_locks.sh@159 -- # waitforlisten 64324 /var/tmp/spdk2.sock 00:07:14.982 20:13:45 -- common/autotest_common.sh@817 -- # '[' -z 64324 ']' 00:07:14.982 20:13:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.982 20:13:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:14.982 20:13:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.982 20:13:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:14.982 20:13:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.241 ************************************ 00:07:15.241 END TEST locking_overlapped_coremask_via_rpc 00:07:15.241 ************************************ 00:07:15.241 20:13:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:15.241 20:13:45 -- common/autotest_common.sh@850 -- # return 0 00:07:15.241 20:13:45 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:15.241 20:13:45 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:15.241 20:13:45 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:15.241 20:13:45 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:15.241 00:07:15.241 real 0m4.602s 00:07:15.241 user 0m1.167s 00:07:15.241 sys 0m0.242s 00:07:15.241 20:13:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:15.241 20:13:45 -- common/autotest_common.sh@10 -- # set +x 00:07:15.241 20:13:45 -- event/cpu_locks.sh@174 -- # cleanup 00:07:15.241 20:13:45 -- event/cpu_locks.sh@15 -- # [[ -z 64301 ]] 00:07:15.241 20:13:45 -- event/cpu_locks.sh@15 -- # killprocess 64301 00:07:15.241 20:13:45 -- common/autotest_common.sh@936 -- # '[' -z 64301 ']' 00:07:15.241 20:13:45 -- common/autotest_common.sh@940 -- # kill -0 64301 00:07:15.241 20:13:45 -- common/autotest_common.sh@941 -- # uname 00:07:15.241 20:13:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.241 20:13:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64301 00:07:15.241 20:13:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.241 20:13:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.501 20:13:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64301' 00:07:15.501 killing process with pid 64301 00:07:15.501 20:13:45 -- common/autotest_common.sh@955 -- # kill 64301 00:07:15.501 20:13:45 -- common/autotest_common.sh@960 -- # wait 64301 00:07:18.037 20:13:47 -- event/cpu_locks.sh@16 -- # [[ -z 64324 ]] 00:07:18.037 20:13:47 -- event/cpu_locks.sh@16 -- # killprocess 64324 00:07:18.037 20:13:47 -- common/autotest_common.sh@936 -- # '[' -z 64324 ']' 00:07:18.037 20:13:47 -- common/autotest_common.sh@940 -- # kill -0 
64324 00:07:18.037 20:13:47 -- common/autotest_common.sh@941 -- # uname 00:07:18.037 20:13:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:18.037 20:13:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64324 00:07:18.037 killing process with pid 64324 00:07:18.037 20:13:47 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:18.037 20:13:47 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:18.037 20:13:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64324' 00:07:18.037 20:13:47 -- common/autotest_common.sh@955 -- # kill 64324 00:07:18.037 20:13:47 -- common/autotest_common.sh@960 -- # wait 64324 00:07:20.572 20:13:50 -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.572 Process with pid 64301 is not found 00:07:20.572 20:13:50 -- event/cpu_locks.sh@1 -- # cleanup 00:07:20.572 20:13:50 -- event/cpu_locks.sh@15 -- # [[ -z 64301 ]] 00:07:20.572 20:13:50 -- event/cpu_locks.sh@15 -- # killprocess 64301 00:07:20.572 20:13:50 -- common/autotest_common.sh@936 -- # '[' -z 64301 ']' 00:07:20.572 20:13:50 -- common/autotest_common.sh@940 -- # kill -0 64301 00:07:20.572 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (64301) - No such process 00:07:20.572 20:13:50 -- common/autotest_common.sh@963 -- # echo 'Process with pid 64301 is not found' 00:07:20.572 20:13:50 -- event/cpu_locks.sh@16 -- # [[ -z 64324 ]] 00:07:20.572 20:13:50 -- event/cpu_locks.sh@16 -- # killprocess 64324 00:07:20.572 20:13:50 -- common/autotest_common.sh@936 -- # '[' -z 64324 ']' 00:07:20.572 Process with pid 64324 is not found 00:07:20.572 20:13:50 -- common/autotest_common.sh@940 -- # kill -0 64324 00:07:20.572 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (64324) - No such process 00:07:20.572 20:13:50 -- common/autotest_common.sh@963 -- # echo 'Process with pid 64324 is not found' 00:07:20.572 20:13:50 -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.572 ************************************ 00:07:20.572 END TEST cpu_locks 00:07:20.572 ************************************ 00:07:20.572 00:07:20.572 real 0m54.003s 00:07:20.572 user 1m28.014s 00:07:20.572 sys 0m7.419s 00:07:20.572 20:13:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:20.572 20:13:50 -- common/autotest_common.sh@10 -- # set +x 00:07:20.572 ************************************ 00:07:20.572 END TEST event 00:07:20.572 ************************************ 00:07:20.572 00:07:20.572 real 1m27.074s 00:07:20.572 user 2m31.128s 00:07:20.572 sys 0m11.960s 00:07:20.572 20:13:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:20.572 20:13:50 -- common/autotest_common.sh@10 -- # set +x 00:07:20.572 20:13:50 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:20.572 20:13:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:20.572 20:13:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.572 20:13:50 -- common/autotest_common.sh@10 -- # set +x 00:07:20.572 ************************************ 00:07:20.572 START TEST thread 00:07:20.572 ************************************ 00:07:20.572 20:13:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:20.572 * Looking for test storage... 
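One detail in the teardown above: the name probed for pid 64324 comes back as reactor_2 rather than reactor_0, since an SPDK app's main thread is named after the first core of its mask and -m 0x1c starts at core 2. The probe is the same one used throughout the suite:

    ps --no-headers -o comm= "$pid"    # -> reactor_2 for a target started with -m 0x1c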
00:07:20.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:20.572 20:13:50 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.572 20:13:50 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:20.572 20:13:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.572 20:13:50 -- common/autotest_common.sh@10 -- # set +x 00:07:20.832 ************************************ 00:07:20.832 START TEST thread_poller_perf 00:07:20.832 ************************************ 00:07:20.832 20:13:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.832 [2024-04-24 20:13:50.913510] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:07:20.832 [2024-04-24 20:13:50.913768] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64526 ] 00:07:21.091 [2024-04-24 20:13:51.081842] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.091 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:21.091 [2024-04-24 20:13:51.315346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.000 ====================================== 00:07:23.000 busy:2497530754 (cyc) 00:07:23.000 total_run_count: 385000 00:07:23.000 tsc_hz: 2490000000 (cyc) 00:07:23.000 ====================================== 00:07:23.000 poller_cost: 6487 (cyc), 2605 (nsec) 00:07:23.000 00:07:23.000 real 0m1.882s 00:07:23.000 user 0m1.653s 00:07:23.000 sys 0m0.119s 00:07:23.000 ************************************ 00:07:23.000 END TEST thread_poller_perf 00:07:23.000 ************************************ 00:07:23.000 20:13:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.000 20:13:52 -- common/autotest_common.sh@10 -- # set +x 00:07:23.000 20:13:52 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:23.000 20:13:52 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:23.000 20:13:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.000 20:13:52 -- common/autotest_common.sh@10 -- # set +x 00:07:23.000 ************************************ 00:07:23.000 START TEST thread_poller_perf 00:07:23.000 ************************************ 00:07:23.000 20:13:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:23.000 [2024-04-24 20:13:52.954210] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:07:23.000 [2024-04-24 20:13:52.954325] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64568 ] 00:07:23.000 [2024-04-24 20:13:53.126103] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.259 Running 1000 pollers for 1 seconds with 0 microseconds period. 
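The poller_cost printed above is just the ratio of the two counters, converted with the reported TSC rate; checking it by hand:

    echo $(( 2497530754 / 385000 ))     # 6487 cycles per poller invocation
    echo '6487 / 2.49' | bc             # ~2605 ns at tsc_hz = 2.49 GHz

So with a 1 microsecond timer period (-l 1) each call costs about 2.6 us of CPU. The -l 0 run below uses untimed busy pollers and drops to 489 cycles (~196 ns) per call; the gap is presumably the timed-poller bookkeeping.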
00:07:23.259 [2024-04-24 20:13:53.370596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.635 ====================================== 00:07:24.635 busy:2493963528 (cyc) 00:07:24.635 total_run_count: 5096000 00:07:24.635 tsc_hz: 2490000000 (cyc) 00:07:24.635 ====================================== 00:07:24.635 poller_cost: 489 (cyc), 196 (nsec) 00:07:24.635 00:07:24.635 real 0m1.895s 00:07:24.635 user 0m1.655s 00:07:24.635 sys 0m0.131s 00:07:24.635 20:13:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.635 ************************************ 00:07:24.635 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:07:24.635 END TEST thread_poller_perf 00:07:24.635 ************************************ 00:07:24.635 20:13:54 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:24.635 00:07:24.635 real 0m4.240s 00:07:24.635 user 0m3.472s 00:07:24.635 sys 0m0.516s 00:07:24.635 20:13:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.635 ************************************ 00:07:24.635 END TEST thread 00:07:24.635 ************************************ 00:07:24.635 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:07:24.895 20:13:54 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:24.895 20:13:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:24.895 20:13:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.895 20:13:54 -- common/autotest_common.sh@10 -- # set +x 00:07:24.895 ************************************ 00:07:24.895 START TEST accel 00:07:24.895 ************************************ 00:07:24.895 20:13:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:25.153 * Looking for test storage... 00:07:25.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:25.153 20:13:55 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:25.153 20:13:55 -- accel/accel.sh@82 -- # get_expected_opcs 00:07:25.153 20:13:55 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:25.153 20:13:55 -- accel/accel.sh@62 -- # spdk_tgt_pid=64659 00:07:25.153 20:13:55 -- accel/accel.sh@63 -- # waitforlisten 64659 00:07:25.153 20:13:55 -- common/autotest_common.sh@817 -- # '[' -z 64659 ']' 00:07:25.153 20:13:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.153 20:13:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:25.153 20:13:55 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:25.153 20:13:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.153 20:13:55 -- accel/accel.sh@61 -- # build_accel_config 00:07:25.153 20:13:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:25.153 20:13:55 -- common/autotest_common.sh@10 -- # set +x 00:07:25.153 20:13:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.153 20:13:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.153 20:13:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.153 20:13:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.153 20:13:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.153 20:13:55 -- accel/accel.sh@40 -- # local IFS=, 00:07:25.153 20:13:55 -- accel/accel.sh@41 -- # jq -r . 
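accel.sh then boots its own target with -c /dev/fd/63: the accel JSON configuration is assembled in the accel_json_cfg array (empty here, since no module flags were passed) and handed to spdk_tgt through a process substitution instead of a file on disk. A minimal sketch of the same pattern, with an illustrative empty config rather than the suite's exact JSON:

    # <( ... ) shows up inside the target's argv as /dev/fd/63
    ./build/bin/spdk_tgt -c <(echo '{"subsystems": []}')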
00:07:25.153 [2024-04-24 20:13:55.263513] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:07:25.153 [2024-04-24 20:13:55.263629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64659 ] 00:07:25.412 [2024-04-24 20:13:55.434818] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.672 [2024-04-24 20:13:55.676951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.610 20:13:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:26.610 20:13:56 -- common/autotest_common.sh@850 -- # return 0 00:07:26.610 20:13:56 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:26.610 20:13:56 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:26.610 20:13:56 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:26.610 20:13:56 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:26.610 20:13:56 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:26.610 20:13:56 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:26.610 20:13:56 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:26.610 20:13:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:26.610 20:13:56 -- common/autotest_common.sh@10 -- # set +x 00:07:26.610 20:13:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # IFS== 00:07:26.610 20:13:56 -- accel/accel.sh@72 -- # read -r opc module 00:07:26.610 20:13:56 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:26.610 20:13:56 -- accel/accel.sh@75 -- # killprocess 64659 00:07:26.610 20:13:56 -- common/autotest_common.sh@936 -- # '[' -z 64659 ']' 00:07:26.610 20:13:56 -- common/autotest_common.sh@940 -- # kill -0 64659 00:07:26.610 20:13:56 -- common/autotest_common.sh@941 -- # uname 00:07:26.610 20:13:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:26.610 20:13:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64659 00:07:26.610 killing process with pid 64659 00:07:26.610 20:13:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:26.610 20:13:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:26.610 20:13:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64659' 00:07:26.610 20:13:56 -- common/autotest_common.sh@955 -- # kill 64659 00:07:26.610 20:13:56 -- common/autotest_common.sh@960 -- # wait 64659 00:07:29.152 20:13:59 -- accel/accel.sh@76 -- # trap - ERR 00:07:29.152 20:13:59 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:29.152 20:13:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:29.152 20:13:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.152 20:13:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.152 20:13:59 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:07:29.152 20:13:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:29.152 20:13:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.152 20:13:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.152 20:13:59 
-- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.152 20:13:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.152 20:13:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.152 20:13:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.152 20:13:59 -- accel/accel.sh@40 -- # local IFS=, 00:07:29.152 20:13:59 -- accel/accel.sh@41 -- # jq -r . 00:07:29.152 20:13:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:29.152 20:13:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.152 20:13:59 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:29.152 20:13:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:29.152 20:13:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.152 20:13:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.409 ************************************ 00:07:29.410 START TEST accel_missing_filename 00:07:29.410 ************************************ 00:07:29.410 20:13:59 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:07:29.410 20:13:59 -- common/autotest_common.sh@638 -- # local es=0 00:07:29.410 20:13:59 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:29.410 20:13:59 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:29.410 20:13:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:29.410 20:13:59 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:29.410 20:13:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:29.410 20:13:59 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:07:29.410 20:13:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:29.410 20:13:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.410 20:13:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.410 20:13:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.410 20:13:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.410 20:13:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.410 20:13:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.410 20:13:59 -- accel/accel.sh@40 -- # local IFS=, 00:07:29.410 20:13:59 -- accel/accel.sh@41 -- # jq -r . 00:07:29.410 [2024-04-24 20:13:59.508012] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:07:29.410 [2024-04-24 20:13:59.508258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64749 ] 00:07:29.667 [2024-04-24 20:13:59.675817] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.925 [2024-04-24 20:13:59.911146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.183 [2024-04-24 20:14:00.158060] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:30.749 [2024-04-24 20:14:00.712978] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:31.008 A filename is required. 
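"A filename is required." is exactly what accel_missing_filename asserts: for compress and decompress workloads accel_perf reads its input from a file, so -l is mandatory. The corrected invocation, using the input file the next test passes (path relative to the SPDK repo root):

    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib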
00:07:31.008 20:14:01 -- common/autotest_common.sh@641 -- # es=234 00:07:31.008 20:14:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:31.008 20:14:01 -- common/autotest_common.sh@650 -- # es=106 00:07:31.008 20:14:01 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:31.008 20:14:01 -- common/autotest_common.sh@658 -- # es=1 00:07:31.008 20:14:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:31.008 00:07:31.008 real 0m1.699s 00:07:31.008 user 0m1.443s 00:07:31.008 sys 0m0.186s 00:07:31.008 20:14:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:31.008 ************************************ 00:07:31.008 END TEST accel_missing_filename 00:07:31.008 ************************************ 00:07:31.008 20:14:01 -- common/autotest_common.sh@10 -- # set +x 00:07:31.008 20:14:01 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.008 20:14:01 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:31.008 20:14:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.008 20:14:01 -- common/autotest_common.sh@10 -- # set +x 00:07:31.266 ************************************ 00:07:31.266 START TEST accel_compress_verify 00:07:31.266 ************************************ 00:07:31.266 20:14:01 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.266 20:14:01 -- common/autotest_common.sh@638 -- # local es=0 00:07:31.266 20:14:01 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.266 20:14:01 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:31.266 20:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:31.266 20:14:01 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:31.266 20:14:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:31.266 20:14:01 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.266 20:14:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:31.266 20:14:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.266 20:14:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.266 20:14:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.266 20:14:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.266 20:14:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.266 20:14:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.266 20:14:01 -- accel/accel.sh@40 -- # local IFS=, 00:07:31.266 20:14:01 -- accel/accel.sh@41 -- # jq -r . 00:07:31.266 [2024-04-24 20:14:01.343163] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
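The es bookkeeping above is plain exit-status arithmetic: 234 as an 8-bit exit code corresponds to a negative return of -22 (256 - 22 = 234, consistent with an EINVAL-style failure from app start), and the NOT helper folds anything above 128 before normalizing it:

    # 234 (raw status) -> 234 - 128 = 106 -> mapped to es=1 -> NOT inverts it to success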
00:07:31.266 [2024-04-24 20:14:01.343329] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64790 ] 00:07:31.527 [2024-04-24 20:14:01.514119] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.527 [2024-04-24 20:14:01.755120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.785 [2024-04-24 20:14:02.006447] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.350 [2024-04-24 20:14:02.552380] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:32.917 00:07:32.917 Compression does not support the verify option, aborting. 00:07:32.917 20:14:02 -- common/autotest_common.sh@641 -- # es=161 00:07:32.917 20:14:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:32.917 20:14:02 -- common/autotest_common.sh@650 -- # es=33 00:07:32.917 20:14:02 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:32.917 20:14:02 -- common/autotest_common.sh@658 -- # es=1 00:07:32.917 20:14:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:32.917 00:07:32.917 real 0m1.707s 00:07:32.917 user 0m1.461s 00:07:32.917 sys 0m0.185s 00:07:32.917 ************************************ 00:07:32.917 END TEST accel_compress_verify 00:07:32.917 ************************************ 00:07:32.917 20:14:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:32.917 20:14:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.917 20:14:03 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:32.917 20:14:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:32.917 20:14:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.917 20:14:03 -- common/autotest_common.sh@10 -- # set +x 00:07:32.917 ************************************ 00:07:32.917 START TEST accel_wrong_workload 00:07:32.917 ************************************ 00:07:32.917 20:14:03 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:07:32.917 20:14:03 -- common/autotest_common.sh@638 -- # local es=0 00:07:32.917 20:14:03 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:32.917 20:14:03 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:32.917 20:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:32.917 20:14:03 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:32.917 20:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:32.917 20:14:03 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:07:32.917 20:14:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.917 20:14:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:32.917 20:14:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.917 20:14:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.917 20:14:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.917 20:14:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.917 20:14:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.917 20:14:03 -- accel/accel.sh@40 -- # local IFS=, 00:07:32.917 20:14:03 -- accel/accel.sh@41 -- # jq -r . 
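The es=234 -> es=106 -> es=1 sequence in the accel_missing_filename teardown above is the harness normalizing the failed command's exit status inside its NOT wrapper. A simplified sketch of that logic, paraphrased from the autotest_common.sh xtrace records in this log rather than copied from the source:

    NOT() {
        local es=0
        valid_exec_arg "$@" && "$@" || es=$?   # accel_perf exited with 234 above
        ((es > 128)) && es=$((es - 128))       # strip the signal offset: 234 -> 106
        case "$es" in
            0) ;;                              # command unexpectedly succeeded
            *) es=1 ;;                         # collapse any failure code: 106 -> 1
        esac
        ((!es == 0))                           # arithmetic negation: NOT succeeds only when es != 0
    }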
00:07:33.175 Unsupported workload type: foobar 00:07:33.175 [2024-04-24 20:14:03.194174] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:33.175 accel_perf options: 00:07:33.175 [-h help message] 00:07:33.175 [-q queue depth per core] 00:07:33.175 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:33.175 [-T number of threads per core 00:07:33.175 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:33.175 [-t time in seconds] 00:07:33.175 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:33.175 [ dif_verify, , dif_generate, dif_generate_copy 00:07:33.175 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:33.175 [-l for compress/decompress workloads, name of uncompressed input file 00:07:33.175 [-S for crc32c workload, use this seed value (default 0) 00:07:33.175 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:33.175 [-f for fill workload, use this BYTE value (default 255) 00:07:33.175 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:33.175 [-y verify result if this switch is on] 00:07:33.175 [-a tasks to allocate per core (default: same value as -q)] 00:07:33.175 Can be used to spread operations across a wider range of memory. 00:07:33.175 ************************************ 00:07:33.175 END TEST accel_wrong_workload 00:07:33.175 ************************************ 00:07:33.175 20:14:03 -- common/autotest_common.sh@641 -- # es=1 00:07:33.175 20:14:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:33.175 20:14:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:33.175 20:14:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:33.175 00:07:33.175 real 0m0.097s 00:07:33.175 user 0m0.089s 00:07:33.175 sys 0m0.050s 00:07:33.175 20:14:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.175 20:14:03 -- common/autotest_common.sh@10 -- # set +x 00:07:33.175 20:14:03 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:33.175 20:14:03 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:33.175 20:14:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.175 20:14:03 -- common/autotest_common.sh@10 -- # set +x 00:07:33.175 ************************************ 00:07:33.175 START TEST accel_negative_buffers 00:07:33.175 ************************************ 00:07:33.175 20:14:03 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:33.175 20:14:03 -- common/autotest_common.sh@638 -- # local es=0 00:07:33.175 20:14:03 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:33.175 20:14:03 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:33.175 20:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:33.175 20:14:03 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:33.175 20:14:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:33.175 20:14:03 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:07:33.175 20:14:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:33.175 20:14:03 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:33.175 20:14:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.175 20:14:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.175 20:14:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.175 20:14:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.175 20:14:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.175 20:14:03 -- accel/accel.sh@40 -- # local IFS=, 00:07:33.175 20:14:03 -- accel/accel.sh@41 -- # jq -r . 00:07:33.436 -x option must be non-negative. 00:07:33.436 [2024-04-24 20:14:03.428838] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:33.436 accel_perf options: 00:07:33.436 [-h help message] 00:07:33.436 [-q queue depth per core] 00:07:33.436 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:33.436 [-T number of threads per core 00:07:33.436 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:33.436 [-t time in seconds] 00:07:33.436 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:33.436 [ dif_verify, , dif_generate, dif_generate_copy 00:07:33.436 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:33.436 [-l for compress/decompress workloads, name of uncompressed input file 00:07:33.436 [-S for crc32c workload, use this seed value (default 0) 00:07:33.436 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:33.436 [-f for fill workload, use this BYTE value (default 255) 00:07:33.436 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:33.436 [-y verify result if this switch is on] 00:07:33.436 [-a tasks to allocate per core (default: same value as -q)] 00:07:33.436 Can be used to spread operations across a wider range of memory. 
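Per the usage text above, -x sets the number of xor source buffers and must be at least 2, so the "-x -1" this negative test passes is rejected during option parsing before any work runs. A minimal invocation that would clear the check looks like:

    # -x 2 is the documented minimum source-buffer count for the xor workload
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2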
00:07:33.436 ************************************ 00:07:33.436 END TEST accel_negative_buffers 00:07:33.436 ************************************ 00:07:33.436 20:14:03 -- common/autotest_common.sh@641 -- # es=1 00:07:33.436 20:14:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:33.436 20:14:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:33.436 20:14:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:33.436 00:07:33.436 real 0m0.096s 00:07:33.436 user 0m0.082s 00:07:33.436 sys 0m0.058s 00:07:33.436 20:14:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:33.436 20:14:03 -- common/autotest_common.sh@10 -- # set +x 00:07:33.436 20:14:03 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:33.436 20:14:03 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:33.436 20:14:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:33.436 20:14:03 -- common/autotest_common.sh@10 -- # set +x 00:07:33.436 ************************************ 00:07:33.436 START TEST accel_crc32c 00:07:33.436 ************************************ 00:07:33.436 20:14:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:33.436 20:14:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:33.436 20:14:03 -- accel/accel.sh@17 -- # local accel_module 00:07:33.436 20:14:03 -- accel/accel.sh@19 -- # IFS=: 00:07:33.436 20:14:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:33.436 20:14:03 -- accel/accel.sh@19 -- # read -r var val 00:07:33.436 20:14:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:33.436 20:14:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.436 20:14:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.436 20:14:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.436 20:14:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.436 20:14:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.436 20:14:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.436 20:14:03 -- accel/accel.sh@40 -- # local IFS=, 00:07:33.436 20:14:03 -- accel/accel.sh@41 -- # jq -r . 00:07:33.436 [2024-04-24 20:14:03.666463] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
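Every suite in this log is driven through the same run_test wrapper, which prints the START/END banners and produces the time(1)-style real/user/sys summary that follows each test. A hedged sketch of its shape, inferred from those records rather than taken from autotest_common.sh (the real banner lines are longer and the helper also manages xtrace state):

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                # emits the real/user/sys lines seen in this log
        echo "************ END TEST $name ************"
    }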
00:07:33.437 [2024-04-24 20:14:03.666597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64886 ] 00:07:33.694 [2024-04-24 20:14:03.839111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.953 [2024-04-24 20:14:04.135656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val= 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val= 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val=0x1 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val= 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val= 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val=crc32c 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val=32 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val= 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val=software 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@22 -- # accel_module=software 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val=32 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val=32 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val=1 00:07:34.214 20:14:04 
-- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val=Yes 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val= 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:34.214 20:14:04 -- accel/accel.sh@20 -- # val= 00:07:34.214 20:14:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # IFS=: 00:07:34.214 20:14:04 -- accel/accel.sh@19 -- # read -r var val 00:07:36.751 20:14:06 -- accel/accel.sh@20 -- # val= 00:07:36.751 20:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # IFS=: 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # read -r var val 00:07:36.751 20:14:06 -- accel/accel.sh@20 -- # val= 00:07:36.751 20:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # IFS=: 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # read -r var val 00:07:36.751 20:14:06 -- accel/accel.sh@20 -- # val= 00:07:36.751 20:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # IFS=: 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # read -r var val 00:07:36.751 20:14:06 -- accel/accel.sh@20 -- # val= 00:07:36.751 20:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # IFS=: 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # read -r var val 00:07:36.751 20:14:06 -- accel/accel.sh@20 -- # val= 00:07:36.751 20:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # IFS=: 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # read -r var val 00:07:36.751 20:14:06 -- accel/accel.sh@20 -- # val= 00:07:36.751 20:14:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # IFS=: 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # read -r var val 00:07:36.751 20:14:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.751 20:14:06 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:36.751 20:14:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.751 ************************************ 00:07:36.751 END TEST accel_crc32c 00:07:36.751 ************************************ 00:07:36.751 00:07:36.751 real 0m2.919s 00:07:36.751 user 0m2.559s 00:07:36.751 sys 0m0.267s 00:07:36.751 20:14:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:36.751 20:14:06 -- common/autotest_common.sh@10 -- # set +x 00:07:36.751 20:14:06 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:36.751 20:14:06 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:36.751 20:14:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:36.751 20:14:06 -- common/autotest_common.sh@10 -- # set +x 00:07:36.751 ************************************ 00:07:36.751 START TEST accel_crc32c_C2 00:07:36.751 
************************************ 00:07:36.751 20:14:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:36.751 20:14:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:36.751 20:14:06 -- accel/accel.sh@17 -- # local accel_module 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # IFS=: 00:07:36.751 20:14:06 -- accel/accel.sh@19 -- # read -r var val 00:07:36.751 20:14:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:36.751 20:14:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:36.751 20:14:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.751 20:14:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.751 20:14:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.751 20:14:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.751 20:14:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.751 20:14:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.751 20:14:06 -- accel/accel.sh@40 -- # local IFS=, 00:07:36.751 20:14:06 -- accel/accel.sh@41 -- # jq -r . 00:07:36.751 [2024-04-24 20:14:06.746422] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:07:36.751 [2024-04-24 20:14:06.746547] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64938 ] 00:07:36.751 [2024-04-24 20:14:06.921095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.013 [2024-04-24 20:14:07.217014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.276 20:14:07 -- accel/accel.sh@20 -- # val= 00:07:37.276 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.276 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.276 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.276 20:14:07 -- accel/accel.sh@20 -- # val= 00:07:37.276 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.276 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.276 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.276 20:14:07 -- accel/accel.sh@20 -- # val=0x1 00:07:37.276 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.276 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.276 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.276 20:14:07 -- accel/accel.sh@20 -- # val= 00:07:37.276 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.276 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.276 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.276 20:14:07 -- accel/accel.sh@20 -- # val= 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.542 20:14:07 -- accel/accel.sh@20 -- # val=crc32c 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.542 20:14:07 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.542 20:14:07 -- accel/accel.sh@20 -- # val=0 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.542 20:14:07 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" 
in 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.542 20:14:07 -- accel/accel.sh@20 -- # val= 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.542 20:14:07 -- accel/accel.sh@20 -- # val=software 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.542 20:14:07 -- accel/accel.sh@22 -- # accel_module=software 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.542 20:14:07 -- accel/accel.sh@20 -- # val=32 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.542 20:14:07 -- accel/accel.sh@20 -- # val=32 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.542 20:14:07 -- accel/accel.sh@20 -- # val=1 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.542 20:14:07 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.542 20:14:07 -- accel/accel.sh@20 -- # val=Yes 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.542 20:14:07 -- accel/accel.sh@20 -- # val= 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:37.542 20:14:07 -- accel/accel.sh@20 -- # val= 00:07:37.542 20:14:07 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # IFS=: 00:07:37.542 20:14:07 -- accel/accel.sh@19 -- # read -r var val 00:07:39.458 20:14:09 -- accel/accel.sh@20 -- # val= 00:07:39.458 20:14:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.458 20:14:09 -- accel/accel.sh@19 -- # IFS=: 00:07:39.458 20:14:09 -- accel/accel.sh@19 -- # read -r var val 00:07:39.458 20:14:09 -- accel/accel.sh@20 -- # val= 00:07:39.458 20:14:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.458 20:14:09 -- accel/accel.sh@19 -- # IFS=: 00:07:39.458 20:14:09 -- accel/accel.sh@19 -- # read -r var val 00:07:39.458 20:14:09 -- accel/accel.sh@20 -- # val= 00:07:39.458 20:14:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.458 20:14:09 -- accel/accel.sh@19 -- # IFS=: 00:07:39.458 20:14:09 -- accel/accel.sh@19 -- # read -r var val 00:07:39.458 20:14:09 -- accel/accel.sh@20 -- # val= 00:07:39.458 20:14:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.458 20:14:09 -- accel/accel.sh@19 -- # IFS=: 00:07:39.458 20:14:09 -- accel/accel.sh@19 -- # read -r var val 00:07:39.458 20:14:09 -- accel/accel.sh@20 -- # val= 00:07:39.458 20:14:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.458 20:14:09 -- accel/accel.sh@19 -- # IFS=: 00:07:39.458 20:14:09 -- accel/accel.sh@19 -- # read -r var val 00:07:39.458 20:14:09 -- accel/accel.sh@20 -- # val= 
00:07:39.458 20:14:09 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.458 20:14:09 -- accel/accel.sh@19 -- # IFS=: 00:07:39.458 20:14:09 -- accel/accel.sh@19 -- # read -r var val 00:07:39.458 20:14:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:39.458 20:14:09 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:39.458 20:14:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.458 00:07:39.458 real 0m2.993s 00:07:39.458 user 0m2.655s 00:07:39.458 sys 0m0.249s 00:07:39.458 ************************************ 00:07:39.458 END TEST accel_crc32c_C2 00:07:39.458 ************************************ 00:07:39.458 20:14:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:39.458 20:14:09 -- common/autotest_common.sh@10 -- # set +x 00:07:39.717 20:14:09 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:39.717 20:14:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:39.717 20:14:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.717 20:14:09 -- common/autotest_common.sh@10 -- # set +x 00:07:39.717 ************************************ 00:07:39.717 START TEST accel_copy 00:07:39.717 ************************************ 00:07:39.717 20:14:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:07:39.717 20:14:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.717 20:14:09 -- accel/accel.sh@17 -- # local accel_module 00:07:39.717 20:14:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:39.717 20:14:09 -- accel/accel.sh@19 -- # IFS=: 00:07:39.717 20:14:09 -- accel/accel.sh@19 -- # read -r var val 00:07:39.717 20:14:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:39.717 20:14:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.717 20:14:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.717 20:14:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.717 20:14:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.717 20:14:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.717 20:14:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.717 20:14:09 -- accel/accel.sh@40 -- # local IFS=, 00:07:39.717 20:14:09 -- accel/accel.sh@41 -- # jq -r . 00:07:39.717 [2024-04-24 20:14:09.893815] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
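The build_accel_config trace that precedes every accel_perf launch (accel_json_cfg=(), three 0-gt-0 gates, a -n check, IFS=, and jq -r .) assembles the JSON the app reads on fd 62 via its -c /dev/fd/62 argument. A rough reconstruction; every name below except accel_json_cfg is hypothetical, and the real gates test counts of optionally enabled hardware modules (all zero in this run):

    build_accel_config() {
        accel_json_cfg=()                                        # per-module JSON snippets
        ((hw_module_count > 0)) && accel_json_cfg+=("$hw_json")  # the "[[ 0 -gt 0 ]]" gates, all false here
        [[ -n $extra_json ]] && accel_json_cfg+=("$extra_json")  # the "[[ -n '' ]]" check above
        local IFS=,                                              # comma-join the snippets...
        jq -r . <<< "{\"accel\": [${accel_json_cfg[*]}]}"        # ...and validate/emit via jq
    }                                                            # fd-62 plumbing omitted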
00:07:39.717 [2024-04-24 20:14:09.893959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64995 ] 00:07:39.975 [2024-04-24 20:14:10.065663] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.232 [2024-04-24 20:14:10.382260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val= 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val= 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val=0x1 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val= 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val= 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val=copy 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@23 -- # accel_opc=copy 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val= 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val=software 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@22 -- # accel_module=software 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val=32 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val=32 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val=1 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.491 
20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val=Yes 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val= 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:40.491 20:14:10 -- accel/accel.sh@20 -- # val= 00:07:40.491 20:14:10 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # IFS=: 00:07:40.491 20:14:10 -- accel/accel.sh@19 -- # read -r var val 00:07:43.023 20:14:12 -- accel/accel.sh@20 -- # val= 00:07:43.023 20:14:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.023 20:14:12 -- accel/accel.sh@19 -- # IFS=: 00:07:43.023 20:14:12 -- accel/accel.sh@19 -- # read -r var val 00:07:43.023 20:14:12 -- accel/accel.sh@20 -- # val= 00:07:43.023 20:14:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.023 20:14:12 -- accel/accel.sh@19 -- # IFS=: 00:07:43.023 20:14:12 -- accel/accel.sh@19 -- # read -r var val 00:07:43.023 20:14:12 -- accel/accel.sh@20 -- # val= 00:07:43.023 20:14:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.023 20:14:12 -- accel/accel.sh@19 -- # IFS=: 00:07:43.023 20:14:12 -- accel/accel.sh@19 -- # read -r var val 00:07:43.023 20:14:12 -- accel/accel.sh@20 -- # val= 00:07:43.023 20:14:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.023 20:14:12 -- accel/accel.sh@19 -- # IFS=: 00:07:43.023 20:14:12 -- accel/accel.sh@19 -- # read -r var val 00:07:43.023 20:14:12 -- accel/accel.sh@20 -- # val= 00:07:43.023 20:14:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.023 20:14:12 -- accel/accel.sh@19 -- # IFS=: 00:07:43.023 20:14:12 -- accel/accel.sh@19 -- # read -r var val 00:07:43.023 20:14:12 -- accel/accel.sh@20 -- # val= 00:07:43.023 20:14:12 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.023 20:14:12 -- accel/accel.sh@19 -- # IFS=: 00:07:43.023 20:14:12 -- accel/accel.sh@19 -- # read -r var val 00:07:43.023 20:14:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.023 ************************************ 00:07:43.023 END TEST accel_copy 00:07:43.023 ************************************ 00:07:43.023 20:14:12 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:43.023 20:14:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.023 00:07:43.023 real 0m3.024s 00:07:43.023 user 0m2.683s 00:07:43.023 sys 0m0.241s 00:07:43.023 20:14:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.023 20:14:12 -- common/autotest_common.sh@10 -- # set +x 00:07:43.023 20:14:12 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.023 20:14:12 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:43.023 20:14:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.023 20:14:12 -- common/autotest_common.sh@10 -- # set +x 00:07:43.023 ************************************ 00:07:43.023 START TEST accel_fill 00:07:43.023 ************************************ 00:07:43.023 20:14:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.023 20:14:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.023 20:14:13 -- accel/accel.sh@17 -- # local 
accel_module 00:07:43.023 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.023 20:14:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.023 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.023 20:14:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:43.024 20:14:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.024 20:14:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.024 20:14:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.024 20:14:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.024 20:14:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.024 20:14:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.024 20:14:13 -- accel/accel.sh@40 -- # local IFS=, 00:07:43.024 20:14:13 -- accel/accel.sh@41 -- # jq -r . 00:07:43.024 [2024-04-24 20:14:13.063949] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:07:43.024 [2024-04-24 20:14:13.064372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65051 ] 00:07:43.283 [2024-04-24 20:14:13.257385] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.541 [2024-04-24 20:14:13.585427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val= 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val= 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val=0x1 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val= 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val= 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val=fill 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@23 -- # accel_opc=fill 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val=0x80 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val= 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val=software 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@22 -- # accel_module=software 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val=64 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val=64 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val=1 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val=Yes 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val= 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:43.800 20:14:13 -- accel/accel.sh@20 -- # val= 00:07:43.800 20:14:13 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # IFS=: 00:07:43.800 20:14:13 -- accel/accel.sh@19 -- # read -r var val 00:07:45.708 20:14:15 -- accel/accel.sh@20 -- # val= 00:07:45.708 20:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.708 20:14:15 -- accel/accel.sh@19 -- # IFS=: 00:07:45.708 20:14:15 -- accel/accel.sh@19 -- # read -r var val 00:07:45.708 20:14:15 -- accel/accel.sh@20 -- # val= 00:07:45.708 20:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.708 20:14:15 -- accel/accel.sh@19 -- # IFS=: 00:07:45.708 20:14:15 -- accel/accel.sh@19 -- # read -r var val 00:07:45.708 20:14:15 -- accel/accel.sh@20 -- # val= 00:07:45.708 20:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.708 20:14:15 -- accel/accel.sh@19 -- # IFS=: 00:07:45.708 20:14:15 -- accel/accel.sh@19 -- # read -r var val 00:07:45.708 20:14:15 -- accel/accel.sh@20 -- # val= 00:07:45.708 20:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.708 20:14:15 -- accel/accel.sh@19 -- # IFS=: 00:07:45.708 20:14:15 -- accel/accel.sh@19 -- # read -r var val 00:07:45.708 20:14:15 -- accel/accel.sh@20 -- # val= 00:07:45.708 20:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.708 20:14:15 -- accel/accel.sh@19 -- # IFS=: 00:07:45.708 20:14:15 -- accel/accel.sh@19 -- # read -r var val 00:07:45.708 20:14:15 -- accel/accel.sh@20 -- # val= 00:07:45.708 20:14:15 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.708 20:14:15 -- accel/accel.sh@19 -- # IFS=: 00:07:45.708 20:14:15 -- accel/accel.sh@19 -- # read -r var val 00:07:45.967 20:14:15 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:07:45.967 20:14:15 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:45.967 20:14:15 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.967 00:07:45.967 real 0m2.941s 00:07:45.967 user 0m2.581s 00:07:45.967 sys 0m0.263s 00:07:45.967 20:14:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.967 20:14:15 -- common/autotest_common.sh@10 -- # set +x 00:07:45.967 ************************************ 00:07:45.967 END TEST accel_fill 00:07:45.967 ************************************ 00:07:45.967 20:14:16 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:45.968 20:14:16 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:45.968 20:14:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.968 20:14:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.968 ************************************ 00:07:45.968 START TEST accel_copy_crc32c 00:07:45.968 ************************************ 00:07:45.968 20:14:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:07:45.968 20:14:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.968 20:14:16 -- accel/accel.sh@17 -- # local accel_module 00:07:45.968 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:45.968 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:45.968 20:14:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:45.968 20:14:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:45.968 20:14:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.968 20:14:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.968 20:14:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.968 20:14:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.968 20:14:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.968 20:14:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.968 20:14:16 -- accel/accel.sh@40 -- # local IFS=, 00:07:45.968 20:14:16 -- accel/accel.sh@41 -- # jq -r . 00:07:45.968 [2024-04-24 20:14:16.163554] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:07:45.968 [2024-04-24 20:14:16.163667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65107 ] 00:07:46.226 [2024-04-24 20:14:16.332843] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.485 [2024-04-24 20:14:16.610171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val= 00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val= 00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val=0x1 00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val= 00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val= 00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val=0 00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val= 00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val=software 00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@22 -- # accel_module=software 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val=32 00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.744 20:14:16 -- accel/accel.sh@20 -- # val=32 
00:07:46.744 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.744 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.745 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.745 20:14:16 -- accel/accel.sh@20 -- # val=1 00:07:46.745 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.745 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.745 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.745 20:14:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.745 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.745 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.745 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.745 20:14:16 -- accel/accel.sh@20 -- # val=Yes 00:07:46.745 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.745 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.745 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.745 20:14:16 -- accel/accel.sh@20 -- # val= 00:07:46.745 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.745 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.745 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:46.745 20:14:16 -- accel/accel.sh@20 -- # val= 00:07:46.745 20:14:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:46.745 20:14:16 -- accel/accel.sh@19 -- # IFS=: 00:07:46.745 20:14:16 -- accel/accel.sh@19 -- # read -r var val 00:07:49.280 20:14:18 -- accel/accel.sh@20 -- # val= 00:07:49.280 20:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.280 20:14:18 -- accel/accel.sh@19 -- # IFS=: 00:07:49.280 20:14:18 -- accel/accel.sh@19 -- # read -r var val 00:07:49.280 20:14:18 -- accel/accel.sh@20 -- # val= 00:07:49.280 20:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.280 20:14:18 -- accel/accel.sh@19 -- # IFS=: 00:07:49.280 20:14:18 -- accel/accel.sh@19 -- # read -r var val 00:07:49.280 20:14:18 -- accel/accel.sh@20 -- # val= 00:07:49.280 20:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.280 20:14:18 -- accel/accel.sh@19 -- # IFS=: 00:07:49.280 20:14:18 -- accel/accel.sh@19 -- # read -r var val 00:07:49.280 20:14:18 -- accel/accel.sh@20 -- # val= 00:07:49.280 20:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.280 20:14:18 -- accel/accel.sh@19 -- # IFS=: 00:07:49.280 20:14:18 -- accel/accel.sh@19 -- # read -r var val 00:07:49.280 20:14:18 -- accel/accel.sh@20 -- # val= 00:07:49.280 20:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.280 20:14:18 -- accel/accel.sh@19 -- # IFS=: 00:07:49.280 20:14:18 -- accel/accel.sh@19 -- # read -r var val 00:07:49.280 20:14:18 -- accel/accel.sh@20 -- # val= 00:07:49.280 20:14:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.280 20:14:18 -- accel/accel.sh@19 -- # IFS=: 00:07:49.280 20:14:18 -- accel/accel.sh@19 -- # read -r var val 00:07:49.280 20:14:18 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.280 20:14:18 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:49.280 20:14:18 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.280 00:07:49.280 real 0m2.877s 00:07:49.280 user 0m2.536s 00:07:49.280 sys 0m0.249s 00:07:49.280 20:14:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:49.280 20:14:18 -- common/autotest_common.sh@10 -- # set +x 00:07:49.280 ************************************ 00:07:49.280 END TEST accel_copy_crc32c 00:07:49.280 ************************************ 00:07:49.280 20:14:19 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:49.280 20:14:19 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:07:49.280 20:14:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.280 20:14:19 -- common/autotest_common.sh@10 -- # set +x 00:07:49.280 ************************************ 00:07:49.280 START TEST accel_copy_crc32c_C2 00:07:49.280 ************************************ 00:07:49.280 20:14:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:49.280 20:14:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.280 20:14:19 -- accel/accel.sh@17 -- # local accel_module 00:07:49.280 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.280 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.280 20:14:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:49.280 20:14:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.280 20:14:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:49.280 20:14:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.280 20:14:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.280 20:14:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.280 20:14:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.280 20:14:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.280 20:14:19 -- accel/accel.sh@40 -- # local IFS=, 00:07:49.280 20:14:19 -- accel/accel.sh@41 -- # jq -r . 00:07:49.280 [2024-04-24 20:14:19.201286] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:07:49.280 [2024-04-24 20:14:19.201399] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65163 ] 00:07:49.280 [2024-04-24 20:14:19.373517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.540 [2024-04-24 20:14:19.613900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val= 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val= 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val=0x1 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val= 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val= 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val=0 00:07:49.800 20:14:19 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val= 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val=software 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@22 -- # accel_module=software 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val=32 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val=32 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val=1 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val=Yes 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val= 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:49.800 20:14:19 -- accel/accel.sh@20 -- # val= 00:07:49.800 20:14:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # IFS=: 00:07:49.800 20:14:19 -- accel/accel.sh@19 -- # read -r var val 00:07:51.717 20:14:21 -- accel/accel.sh@20 -- # val= 00:07:51.717 20:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.717 20:14:21 -- accel/accel.sh@19 -- # IFS=: 00:07:51.717 20:14:21 -- accel/accel.sh@19 -- # read -r var val 00:07:51.717 20:14:21 -- accel/accel.sh@20 -- # val= 00:07:51.717 20:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.717 20:14:21 -- accel/accel.sh@19 -- # IFS=: 00:07:51.717 20:14:21 -- accel/accel.sh@19 -- # read -r var val 00:07:51.717 20:14:21 -- accel/accel.sh@20 -- # val= 00:07:51.717 20:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.717 20:14:21 -- accel/accel.sh@19 -- # IFS=: 00:07:51.717 20:14:21 -- accel/accel.sh@19 -- # read -r var val 
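The long runs of val=... / case "$var" in / IFS=: / read -r var val records surrounding each run are accel_test parsing accel_perf's "key: value" output stream to learn which opcode and module actually executed; the accel_opc=copy_crc32c and accel_module=software assignments above capture them, and the [[ -n ... ]] checks after each run assert on them. One plausible shape of that loop, reconstructed only from these xtrace lines (the key patterns and the trimming are assumptions):

    while IFS=: read -r var val; do               # split each output line at the colon
        val=${val## }                             # strip the leading space (assumed trim)
        case "$var" in
            *module*) accel_module=$val ;;        # trace: accel_module=software
            *opcode*) accel_opc=$val ;;           # trace: accel_opc=copy_crc32c
        esac
    done < <(accel_perf "$@")
    [[ -n $accel_module && -n $accel_opc ]]       # the -n checks seen after each run
    [[ $accel_module == software ]]               # software module expected in this run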
00:07:51.717 20:14:21 -- accel/accel.sh@20 -- # val= 00:07:51.717 20:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.717 20:14:21 -- accel/accel.sh@19 -- # IFS=: 00:07:51.717 20:14:21 -- accel/accel.sh@19 -- # read -r var val 00:07:51.717 20:14:21 -- accel/accel.sh@20 -- # val= 00:07:51.717 20:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.717 20:14:21 -- accel/accel.sh@19 -- # IFS=: 00:07:51.717 20:14:21 -- accel/accel.sh@19 -- # read -r var val 00:07:51.717 20:14:21 -- accel/accel.sh@20 -- # val= 00:07:51.717 20:14:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:51.717 20:14:21 -- accel/accel.sh@19 -- # IFS=: 00:07:51.717 20:14:21 -- accel/accel.sh@19 -- # read -r var val 00:07:51.717 20:14:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.717 20:14:21 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:51.717 20:14:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.717 00:07:51.717 real 0m2.688s 00:07:51.717 user 0m2.422s 00:07:51.717 sys 0m0.181s 00:07:51.717 20:14:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:51.717 ************************************ 00:07:51.717 END TEST accel_copy_crc32c_C2 00:07:51.717 ************************************ 00:07:51.717 20:14:21 -- common/autotest_common.sh@10 -- # set +x 00:07:51.717 20:14:21 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:51.717 20:14:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:51.717 20:14:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.717 20:14:21 -- common/autotest_common.sh@10 -- # set +x 00:07:51.980 ************************************ 00:07:51.980 START TEST accel_dualcast 00:07:51.980 ************************************ 00:07:51.980 20:14:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:07:51.980 20:14:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.980 20:14:21 -- accel/accel.sh@17 -- # local accel_module 00:07:51.980 20:14:21 -- accel/accel.sh@19 -- # IFS=: 00:07:51.980 20:14:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:51.980 20:14:21 -- accel/accel.sh@19 -- # read -r var val 00:07:51.980 20:14:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:51.980 20:14:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.980 20:14:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.980 20:14:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.980 20:14:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.980 20:14:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.980 20:14:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.980 20:14:21 -- accel/accel.sh@40 -- # local IFS=, 00:07:51.980 20:14:21 -- accel/accel.sh@41 -- # jq -r . 00:07:51.980 [2024-04-24 20:14:22.040364] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
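The copy_crc32c case that just finished (real 0m2.688s on the software module) and the dualcast case starting here both go through the same accel_test wrapper around accel_perf. A minimal way to replay the copy_crc32c run by hand — assuming the SPDK tree is built at /home/vagrant/spdk_repo/spdk as in this job, and that the empty accel JSON config the harness feeds over /dev/fd/62 can be dropped when no hardware module is configured — is:

# ~1 s of software copy_crc32c, verified (-y); -C 2 matches the _C2 test name and,
# judging by the '4096 bytes'/'8192 bytes' values traced above, splits the copy
# across two source segments
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2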
00:07:51.980 [2024-04-24 20:14:22.040531] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65219 ] 00:07:52.276 [2024-04-24 20:14:22.214048] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.277 [2024-04-24 20:14:22.463716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val= 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val= 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val=0x1 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val= 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val= 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val=dualcast 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val= 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val=software 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@22 -- # accel_module=software 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val=32 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val=32 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val=1 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val='1 seconds' 
00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val=Yes 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val= 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:52.539 20:14:22 -- accel/accel.sh@20 -- # val= 00:07:52.539 20:14:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # IFS=: 00:07:52.539 20:14:22 -- accel/accel.sh@19 -- # read -r var val 00:07:54.456 20:14:24 -- accel/accel.sh@20 -- # val= 00:07:54.456 20:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.456 20:14:24 -- accel/accel.sh@19 -- # IFS=: 00:07:54.456 20:14:24 -- accel/accel.sh@19 -- # read -r var val 00:07:54.456 20:14:24 -- accel/accel.sh@20 -- # val= 00:07:54.456 20:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.456 20:14:24 -- accel/accel.sh@19 -- # IFS=: 00:07:54.456 20:14:24 -- accel/accel.sh@19 -- # read -r var val 00:07:54.715 20:14:24 -- accel/accel.sh@20 -- # val= 00:07:54.716 20:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.716 20:14:24 -- accel/accel.sh@19 -- # IFS=: 00:07:54.716 20:14:24 -- accel/accel.sh@19 -- # read -r var val 00:07:54.716 20:14:24 -- accel/accel.sh@20 -- # val= 00:07:54.716 20:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.716 20:14:24 -- accel/accel.sh@19 -- # IFS=: 00:07:54.716 20:14:24 -- accel/accel.sh@19 -- # read -r var val 00:07:54.716 20:14:24 -- accel/accel.sh@20 -- # val= 00:07:54.716 20:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.716 20:14:24 -- accel/accel.sh@19 -- # IFS=: 00:07:54.716 20:14:24 -- accel/accel.sh@19 -- # read -r var val 00:07:54.716 20:14:24 -- accel/accel.sh@20 -- # val= 00:07:54.716 20:14:24 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.716 20:14:24 -- accel/accel.sh@19 -- # IFS=: 00:07:54.716 20:14:24 -- accel/accel.sh@19 -- # read -r var val 00:07:54.716 20:14:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:54.716 20:14:24 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:54.716 20:14:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.716 00:07:54.716 real 0m2.725s 00:07:54.716 user 0m2.448s 00:07:54.716 sys 0m0.179s 00:07:54.716 20:14:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:54.716 20:14:24 -- common/autotest_common.sh@10 -- # set +x 00:07:54.716 ************************************ 00:07:54.716 END TEST accel_dualcast 00:07:54.716 ************************************ 00:07:54.716 20:14:24 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:54.716 20:14:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:54.716 20:14:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:54.716 20:14:24 -- common/autotest_common.sh@10 -- # set +x 00:07:54.716 ************************************ 00:07:54.716 START TEST accel_compare 00:07:54.716 ************************************ 00:07:54.716 20:14:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:07:54.716 20:14:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.716 20:14:24 -- accel/accel.sh@17 -- # local 
accel_module 00:07:54.716 20:14:24 -- accel/accel.sh@19 -- # IFS=: 00:07:54.716 20:14:24 -- accel/accel.sh@19 -- # read -r var val 00:07:54.716 20:14:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:54.716 20:14:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:54.716 20:14:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.716 20:14:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:54.716 20:14:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:54.716 20:14:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.716 20:14:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.716 20:14:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:54.716 20:14:24 -- accel/accel.sh@40 -- # local IFS=, 00:07:54.716 20:14:24 -- accel/accel.sh@41 -- # jq -r . 00:07:54.716 [2024-04-24 20:14:24.943665] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:07:54.716 [2024-04-24 20:14:24.943799] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65271 ] 00:07:54.975 [2024-04-24 20:14:25.120308] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.235 [2024-04-24 20:14:25.358036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.495 20:14:25 -- accel/accel.sh@20 -- # val= 00:07:55.495 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.495 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.495 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.495 20:14:25 -- accel/accel.sh@20 -- # val= 00:07:55.495 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.495 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.495 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.495 20:14:25 -- accel/accel.sh@20 -- # val=0x1 00:07:55.495 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.495 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.495 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.495 20:14:25 -- accel/accel.sh@20 -- # val= 00:07:55.495 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.496 20:14:25 -- accel/accel.sh@20 -- # val= 00:07:55.496 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.496 20:14:25 -- accel/accel.sh@20 -- # val=compare 00:07:55.496 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.496 20:14:25 -- accel/accel.sh@23 -- # accel_opc=compare 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.496 20:14:25 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:55.496 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.496 20:14:25 -- accel/accel.sh@20 -- # val= 00:07:55.496 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.496 20:14:25 -- accel/accel.sh@20 -- # val=software 00:07:55.496 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 
00:07:55.496 20:14:25 -- accel/accel.sh@22 -- # accel_module=software 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.496 20:14:25 -- accel/accel.sh@20 -- # val=32 00:07:55.496 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.496 20:14:25 -- accel/accel.sh@20 -- # val=32 00:07:55.496 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.496 20:14:25 -- accel/accel.sh@20 -- # val=1 00:07:55.496 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.496 20:14:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.496 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.496 20:14:25 -- accel/accel.sh@20 -- # val=Yes 00:07:55.496 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.496 20:14:25 -- accel/accel.sh@20 -- # val= 00:07:55.496 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:55.496 20:14:25 -- accel/accel.sh@20 -- # val= 00:07:55.496 20:14:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # IFS=: 00:07:55.496 20:14:25 -- accel/accel.sh@19 -- # read -r var val 00:07:57.404 20:14:27 -- accel/accel.sh@20 -- # val= 00:07:57.404 20:14:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.404 20:14:27 -- accel/accel.sh@19 -- # IFS=: 00:07:57.404 20:14:27 -- accel/accel.sh@19 -- # read -r var val 00:07:57.404 20:14:27 -- accel/accel.sh@20 -- # val= 00:07:57.404 20:14:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.404 20:14:27 -- accel/accel.sh@19 -- # IFS=: 00:07:57.404 20:14:27 -- accel/accel.sh@19 -- # read -r var val 00:07:57.404 20:14:27 -- accel/accel.sh@20 -- # val= 00:07:57.404 20:14:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.404 20:14:27 -- accel/accel.sh@19 -- # IFS=: 00:07:57.404 20:14:27 -- accel/accel.sh@19 -- # read -r var val 00:07:57.404 20:14:27 -- accel/accel.sh@20 -- # val= 00:07:57.404 20:14:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.404 20:14:27 -- accel/accel.sh@19 -- # IFS=: 00:07:57.404 20:14:27 -- accel/accel.sh@19 -- # read -r var val 00:07:57.404 20:14:27 -- accel/accel.sh@20 -- # val= 00:07:57.404 20:14:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.404 20:14:27 -- accel/accel.sh@19 -- # IFS=: 00:07:57.404 20:14:27 -- accel/accel.sh@19 -- # read -r var val 00:07:57.404 20:14:27 -- accel/accel.sh@20 -- # val= 00:07:57.404 20:14:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:57.404 20:14:27 -- accel/accel.sh@19 -- # IFS=: 00:07:57.404 20:14:27 -- accel/accel.sh@19 -- # read -r var val 00:07:57.404 20:14:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:57.404 20:14:27 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:57.404 20:14:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.404 00:07:57.404 real 0m2.701s 00:07:57.404 user 0m2.414s 00:07:57.404 sys 
0m0.196s 00:07:57.404 20:14:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:57.404 20:14:27 -- common/autotest_common.sh@10 -- # set +x 00:07:57.404 ************************************ 00:07:57.404 END TEST accel_compare 00:07:57.404 ************************************ 00:07:57.404 20:14:27 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:57.404 20:14:27 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:57.404 20:14:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:57.404 20:14:27 -- common/autotest_common.sh@10 -- # set +x 00:07:57.663 ************************************ 00:07:57.663 START TEST accel_xor 00:07:57.663 ************************************ 00:07:57.663 20:14:27 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:07:57.663 20:14:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:57.663 20:14:27 -- accel/accel.sh@17 -- # local accel_module 00:07:57.663 20:14:27 -- accel/accel.sh@19 -- # IFS=: 00:07:57.663 20:14:27 -- accel/accel.sh@19 -- # read -r var val 00:07:57.663 20:14:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:57.663 20:14:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:57.663 20:14:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:57.663 20:14:27 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.663 20:14:27 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:57.663 20:14:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.663 20:14:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.663 20:14:27 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.663 20:14:27 -- accel/accel.sh@40 -- # local IFS=, 00:07:57.663 20:14:27 -- accel/accel.sh@41 -- # jq -r . 00:07:57.663 [2024-04-24 20:14:27.788652] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
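This xor case and the -x 3 variant that follows it differ only in the number of source buffers (the trace shows val=2 here and val=3 below): the workload XORs the sources into the destination buffer. Assuming the same build tree, both runs can be replayed directly:

# two-source xor (the default), then three sources via -x 3, both verified with -y
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
# the byte-level operation itself: 0xF0 ^ 0x0F == 0xFF
printf '0x%X\n' $(( 0xF0 ^ 0x0F ))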
00:07:57.663 [2024-04-24 20:14:27.788888] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65321 ] 00:07:57.924 [2024-04-24 20:14:27.959115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.183 [2024-04-24 20:14:28.198894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val= 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val= 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val=0x1 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val= 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val= 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val=xor 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@23 -- # accel_opc=xor 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val=2 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val= 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val=software 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@22 -- # accel_module=software 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val=32 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val=32 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val=1 00:07:58.443 20:14:28 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val=Yes 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val= 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:07:58.443 20:14:28 -- accel/accel.sh@20 -- # val= 00:07:58.443 20:14:28 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # IFS=: 00:07:58.443 20:14:28 -- accel/accel.sh@19 -- # read -r var val 00:08:00.403 20:14:30 -- accel/accel.sh@20 -- # val= 00:08:00.403 20:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.403 20:14:30 -- accel/accel.sh@19 -- # IFS=: 00:08:00.403 20:14:30 -- accel/accel.sh@19 -- # read -r var val 00:08:00.403 20:14:30 -- accel/accel.sh@20 -- # val= 00:08:00.403 20:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.403 20:14:30 -- accel/accel.sh@19 -- # IFS=: 00:08:00.403 20:14:30 -- accel/accel.sh@19 -- # read -r var val 00:08:00.403 20:14:30 -- accel/accel.sh@20 -- # val= 00:08:00.403 20:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.403 20:14:30 -- accel/accel.sh@19 -- # IFS=: 00:08:00.403 20:14:30 -- accel/accel.sh@19 -- # read -r var val 00:08:00.403 20:14:30 -- accel/accel.sh@20 -- # val= 00:08:00.403 20:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.403 20:14:30 -- accel/accel.sh@19 -- # IFS=: 00:08:00.403 20:14:30 -- accel/accel.sh@19 -- # read -r var val 00:08:00.403 20:14:30 -- accel/accel.sh@20 -- # val= 00:08:00.403 20:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.403 20:14:30 -- accel/accel.sh@19 -- # IFS=: 00:08:00.403 20:14:30 -- accel/accel.sh@19 -- # read -r var val 00:08:00.403 20:14:30 -- accel/accel.sh@20 -- # val= 00:08:00.403 20:14:30 -- accel/accel.sh@21 -- # case "$var" in 00:08:00.403 20:14:30 -- accel/accel.sh@19 -- # IFS=: 00:08:00.403 20:14:30 -- accel/accel.sh@19 -- # read -r var val 00:08:00.403 20:14:30 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:00.404 20:14:30 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:00.404 20:14:30 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:00.404 00:08:00.404 real 0m2.701s 00:08:00.404 user 0m2.427s 00:08:00.404 sys 0m0.177s 00:08:00.404 ************************************ 00:08:00.404 END TEST accel_xor 00:08:00.404 ************************************ 00:08:00.404 20:14:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:00.404 20:14:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.404 20:14:30 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:00.404 20:14:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:00.404 20:14:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:00.404 20:14:30 -- common/autotest_common.sh@10 -- # set +x 00:08:00.404 ************************************ 00:08:00.404 START TEST accel_xor 00:08:00.404 ************************************ 00:08:00.404 
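The START/END TEST banners and the real/user/sys lines bracketing each case are printed by the suite's run_test helper in common/autotest_common.sh, which also produces the '[' N -le 1 ']' argument checks and the xtrace_disable / set +x entries visible throughout this log. A simplified sketch of that pattern (not the actual helper, which does more bookkeeping):

run_test() {
  # sketch only: banner, time the wrapped command, banner again
  local name=$1; shift
  echo "START TEST $name"
  time "$@"
  echo "END TEST $name"
}
run_test accel_xor accel_test -t 1 -w xor -y -x 3   # as invoked above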
20:14:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:08:00.404 20:14:30 -- accel/accel.sh@16 -- # local accel_opc 00:08:00.404 20:14:30 -- accel/accel.sh@17 -- # local accel_module 00:08:00.404 20:14:30 -- accel/accel.sh@19 -- # IFS=: 00:08:00.404 20:14:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:00.404 20:14:30 -- accel/accel.sh@19 -- # read -r var val 00:08:00.404 20:14:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:00.404 20:14:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.404 20:14:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:00.404 20:14:30 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:00.404 20:14:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.404 20:14:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.404 20:14:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:00.404 20:14:30 -- accel/accel.sh@40 -- # local IFS=, 00:08:00.404 20:14:30 -- accel/accel.sh@41 -- # jq -r . 00:08:00.663 [2024-04-24 20:14:30.645770] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:08:00.663 [2024-04-24 20:14:30.646036] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65379 ] 00:08:00.663 [2024-04-24 20:14:30.816499] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.922 [2024-04-24 20:14:31.055154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val= 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val= 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val=0x1 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val= 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val= 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val=xor 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@23 -- # accel_opc=xor 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val=3 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 
00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val= 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val=software 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@22 -- # accel_module=software 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val=32 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val=32 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val=1 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val=Yes 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val= 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:01.181 20:14:31 -- accel/accel.sh@20 -- # val= 00:08:01.181 20:14:31 -- accel/accel.sh@21 -- # case "$var" in 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # IFS=: 00:08:01.181 20:14:31 -- accel/accel.sh@19 -- # read -r var val 00:08:03.715 20:14:33 -- accel/accel.sh@20 -- # val= 00:08:03.715 20:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # IFS=: 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # read -r var val 00:08:03.715 20:14:33 -- accel/accel.sh@20 -- # val= 00:08:03.715 20:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # IFS=: 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # read -r var val 00:08:03.715 20:14:33 -- accel/accel.sh@20 -- # val= 00:08:03.715 20:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # IFS=: 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # read -r var val 00:08:03.715 20:14:33 -- accel/accel.sh@20 -- # val= 00:08:03.715 20:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # IFS=: 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # read -r var val 00:08:03.715 20:14:33 -- accel/accel.sh@20 -- # val= 00:08:03.715 20:14:33 -- accel/accel.sh@21 -- # case "$var" in 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # IFS=: 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # read -r var val 00:08:03.715 20:14:33 -- accel/accel.sh@20 -- # val= 00:08:03.715 20:14:33 -- accel/accel.sh@21 -- # case "$var" in 
00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # IFS=: 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # read -r var val 00:08:03.715 20:14:33 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:03.715 20:14:33 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:03.715 20:14:33 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:03.715 00:08:03.715 real 0m2.753s 00:08:03.715 user 0m2.472s 00:08:03.715 sys 0m0.186s 00:08:03.715 20:14:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:03.715 ************************************ 00:08:03.715 END TEST accel_xor 00:08:03.715 ************************************ 00:08:03.715 20:14:33 -- common/autotest_common.sh@10 -- # set +x 00:08:03.715 20:14:33 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:03.715 20:14:33 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:03.715 20:14:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:03.715 20:14:33 -- common/autotest_common.sh@10 -- # set +x 00:08:03.715 ************************************ 00:08:03.715 START TEST accel_dif_verify 00:08:03.715 ************************************ 00:08:03.715 20:14:33 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:08:03.715 20:14:33 -- accel/accel.sh@16 -- # local accel_opc 00:08:03.715 20:14:33 -- accel/accel.sh@17 -- # local accel_module 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # IFS=: 00:08:03.715 20:14:33 -- accel/accel.sh@19 -- # read -r var val 00:08:03.715 20:14:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:03.715 20:14:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:03.715 20:14:33 -- accel/accel.sh@12 -- # build_accel_config 00:08:03.715 20:14:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:03.715 20:14:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:03.715 20:14:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:03.715 20:14:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:03.715 20:14:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:03.715 20:14:33 -- accel/accel.sh@40 -- # local IFS=, 00:08:03.715 20:14:33 -- accel/accel.sh@41 -- # jq -r . 00:08:03.715 [2024-04-24 20:14:33.560374] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
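dif_verify moves from plain data movement to the DIF (Data Integrity Field) path. Judging from the sizes traced below — '4096 bytes' buffers with '512 bytes' blocks and '8 bytes' of metadata — each 512-byte block carries an 8-byte protection-information field that the workload checks. To replay just this case against the software module:

# ~1 s of DIF verification on generated buffers
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify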
00:08:03.715 [2024-04-24 20:14:33.560485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65430 ] 00:08:03.715 [2024-04-24 20:14:33.731491] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.972 [2024-04-24 20:14:33.971952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val= 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val= 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val=0x1 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val= 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val= 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val=dif_verify 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val='512 bytes' 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val='8 bytes' 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val= 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val=software 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@22 -- # accel_module=software 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 
-- # val=32 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val=32 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val=1 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val=No 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val= 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:04.230 20:14:34 -- accel/accel.sh@20 -- # val= 00:08:04.230 20:14:34 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # IFS=: 00:08:04.230 20:14:34 -- accel/accel.sh@19 -- # read -r var val 00:08:06.130 20:14:36 -- accel/accel.sh@20 -- # val= 00:08:06.130 20:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # IFS=: 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # read -r var val 00:08:06.130 20:14:36 -- accel/accel.sh@20 -- # val= 00:08:06.130 20:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # IFS=: 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # read -r var val 00:08:06.130 20:14:36 -- accel/accel.sh@20 -- # val= 00:08:06.130 20:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # IFS=: 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # read -r var val 00:08:06.130 20:14:36 -- accel/accel.sh@20 -- # val= 00:08:06.130 20:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # IFS=: 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # read -r var val 00:08:06.130 20:14:36 -- accel/accel.sh@20 -- # val= 00:08:06.130 20:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # IFS=: 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # read -r var val 00:08:06.130 20:14:36 -- accel/accel.sh@20 -- # val= 00:08:06.130 20:14:36 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # IFS=: 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # read -r var val 00:08:06.130 20:14:36 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:06.130 20:14:36 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:06.130 20:14:36 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.130 00:08:06.130 real 0m2.686s 00:08:06.130 user 0m0.025s 00:08:06.130 sys 0m0.004s 00:08:06.130 20:14:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:06.130 20:14:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.130 ************************************ 00:08:06.130 END TEST 
accel_dif_verify 00:08:06.130 ************************************ 00:08:06.130 20:14:36 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:06.130 20:14:36 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:06.130 20:14:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.130 20:14:36 -- common/autotest_common.sh@10 -- # set +x 00:08:06.130 ************************************ 00:08:06.130 START TEST accel_dif_generate 00:08:06.130 ************************************ 00:08:06.130 20:14:36 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:08:06.130 20:14:36 -- accel/accel.sh@16 -- # local accel_opc 00:08:06.130 20:14:36 -- accel/accel.sh@17 -- # local accel_module 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # IFS=: 00:08:06.130 20:14:36 -- accel/accel.sh@19 -- # read -r var val 00:08:06.130 20:14:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:06.130 20:14:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:06.130 20:14:36 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.130 20:14:36 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.130 20:14:36 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:06.130 20:14:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.130 20:14:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.130 20:14:36 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.130 20:14:36 -- accel/accel.sh@40 -- # local IFS=, 00:08:06.130 20:14:36 -- accel/accel.sh@41 -- # jq -r . 00:08:06.389 [2024-04-24 20:14:36.395700] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:08:06.389 [2024-04-24 20:14:36.395801] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65486 ] 00:08:06.389 [2024-04-24 20:14:36.566406] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.647 [2024-04-24 20:14:36.802005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val= 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val= 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val=0x1 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val= 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val= 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val=dif_generate 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val='512 bytes' 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val='8 bytes' 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val= 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val=software 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@22 -- # accel_module=software 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val=32 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val=32 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val=1 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val=No 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val= 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:06.907 20:14:37 -- accel/accel.sh@20 -- # val= 00:08:06.907 20:14:37 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # IFS=: 00:08:06.907 20:14:37 -- accel/accel.sh@19 -- # read -r var val 00:08:08.809 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:08.809 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.809 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:08.809 20:14:39 -- 
accel/accel.sh@19 -- # read -r var val 00:08:08.809 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:08.809 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.809 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:08.809 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:08.809 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:08.809 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.809 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:08.809 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:08.809 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:08.809 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.809 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:08.809 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:08.809 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:08.809 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.809 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:08.809 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:08.809 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:08.809 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:08.809 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:08.809 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:08.809 20:14:39 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:08.809 20:14:39 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:08.809 20:14:39 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:08.809 00:08:08.809 real 0m2.696s 00:08:08.809 user 0m2.419s 00:08:08.809 sys 0m0.182s 00:08:08.809 20:14:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.809 20:14:39 -- common/autotest_common.sh@10 -- # set +x 00:08:08.809 ************************************ 00:08:08.809 END TEST accel_dif_generate 00:08:08.809 ************************************ 00:08:09.068 20:14:39 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:09.068 20:14:39 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:09.068 20:14:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.068 20:14:39 -- common/autotest_common.sh@10 -- # set +x 00:08:09.068 ************************************ 00:08:09.068 START TEST accel_dif_generate_copy 00:08:09.068 ************************************ 00:08:09.068 20:14:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:08:09.068 20:14:39 -- accel/accel.sh@16 -- # local accel_opc 00:08:09.068 20:14:39 -- accel/accel.sh@17 -- # local accel_module 00:08:09.068 20:14:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:09.068 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.068 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.068 20:14:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:09.068 20:14:39 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.068 20:14:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:09.068 20:14:39 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:09.068 20:14:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.068 20:14:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.068 20:14:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:09.068 20:14:39 -- accel/accel.sh@40 -- # local IFS=, 00:08:09.068 20:14:39 -- accel/accel.sh@41 -- # jq -r . 00:08:09.068 [2024-04-24 20:14:39.239148] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
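dif_generate_copy, starting here, completes the DIF trio: dif_generate writes the protection information, dif_verify (above) checks it, and dif_generate_copy generates it while copying into a fresh destination buffer. Since all three take the same shape of invocation, a quick local sweep of the software path might look like:

# run each DIF workload for ~1 s, assuming the build tree used in this job
for w in dif_verify dif_generate dif_generate_copy; do
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$w"
done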
00:08:09.068 [2024-04-24 20:14:39.239309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65541 ] 00:08:09.326 [2024-04-24 20:14:39.428255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.585 [2024-04-24 20:14:39.671646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val=0x1 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val=software 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@22 -- # accel_module=software 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val=32 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val=32 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 
-- # val=1 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val=No 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:09.844 20:14:39 -- accel/accel.sh@20 -- # val= 00:08:09.844 20:14:39 -- accel/accel.sh@21 -- # case "$var" in 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # IFS=: 00:08:09.844 20:14:39 -- accel/accel.sh@19 -- # read -r var val 00:08:11.776 20:14:41 -- accel/accel.sh@20 -- # val= 00:08:11.776 20:14:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.776 20:14:41 -- accel/accel.sh@19 -- # IFS=: 00:08:11.776 20:14:41 -- accel/accel.sh@19 -- # read -r var val 00:08:11.776 20:14:41 -- accel/accel.sh@20 -- # val= 00:08:11.776 20:14:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.776 20:14:41 -- accel/accel.sh@19 -- # IFS=: 00:08:11.776 20:14:41 -- accel/accel.sh@19 -- # read -r var val 00:08:11.776 20:14:41 -- accel/accel.sh@20 -- # val= 00:08:11.776 20:14:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.776 20:14:41 -- accel/accel.sh@19 -- # IFS=: 00:08:11.776 20:14:41 -- accel/accel.sh@19 -- # read -r var val 00:08:11.776 20:14:41 -- accel/accel.sh@20 -- # val= 00:08:11.776 20:14:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.776 20:14:41 -- accel/accel.sh@19 -- # IFS=: 00:08:11.776 20:14:41 -- accel/accel.sh@19 -- # read -r var val 00:08:11.776 20:14:41 -- accel/accel.sh@20 -- # val= 00:08:11.776 20:14:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.776 20:14:41 -- accel/accel.sh@19 -- # IFS=: 00:08:11.776 20:14:41 -- accel/accel.sh@19 -- # read -r var val 00:08:11.776 20:14:41 -- accel/accel.sh@20 -- # val= 00:08:11.776 20:14:41 -- accel/accel.sh@21 -- # case "$var" in 00:08:11.776 20:14:41 -- accel/accel.sh@19 -- # IFS=: 00:08:11.776 20:14:41 -- accel/accel.sh@19 -- # read -r var val 00:08:11.776 20:14:41 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:11.776 20:14:41 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:11.776 20:14:41 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:11.776 00:08:11.776 real 0m2.726s 00:08:11.776 user 0m2.447s 00:08:11.776 sys 0m0.185s 00:08:11.776 20:14:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:11.776 20:14:41 -- common/autotest_common.sh@10 -- # set +x 00:08:11.776 ************************************ 00:08:11.776 END TEST accel_dif_generate_copy 00:08:11.776 ************************************ 00:08:11.776 20:14:41 -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:11.776 20:14:41 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:08:11.776 20:14:41 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:11.777 20:14:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:11.777 20:14:41 -- 
00:08:12.035 ************************************
00:08:12.035 START TEST accel_comp
00:08:12.035 ************************************
00:08:12.035 [xtrace condensed: accel_test launches /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib]
00:08:12.035 [2024-04-24 20:14:42.133464] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization...
00:08:12.035 [2024-04-24 20:14:42.133586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65589 ]
[2024-04-24 20:14:42.305931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-24 20:14:42.547811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:12.811 [xtrace condensed: test config: core mask 0x1, opc compress, '4096 bytes', module software, input /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1 thread, '1 seconds', verify No]
00:08:14.717 [xtrace condensed: post-run checks passed: module software, opc compress]
00:08:14.717 real 0m2.715s
00:08:14.717 user 0m2.446s
00:08:14.717 sys 0m0.182s
00:08:14.717 ************************************
00:08:14.717 END TEST accel_comp
00:08:14.717 ************************************
00:08:14.717 20:14:44 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y
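accel_decomp, queued above, reverses the previous test and adds -y; judging by the 'verify Yes' value in its trace, -y makes accel_perf check the decompressed output rather than only timing it. Mirroring the exact command the harness logs (note that -c /dev/fd/62 only works under the harness, which holds the generated JSON config on that descriptor; point -c at a file when running by hand):

    # decompress the bib archive and verify the result
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y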
00:08:14.977 ************************************
00:08:14.977 START TEST accel_decomp
00:08:14.977 ************************************
00:08:14.977 [xtrace condensed: accel_test launches /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y]
00:08:14.977 [2024-04-24 20:14:44.997639] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization...
00:08:14.977 [2024-04-24 20:14:44.997785] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65646 ]
[2024-04-24 20:14:45.168019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-04-24 20:14:45.402584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:15.497 [xtrace condensed: test config: core mask 0x1, opc decompress, '4096 bytes', module software, archive /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1 thread, '1 seconds', verify Yes]
00:08:17.661 [xtrace condensed: post-run checks passed: module software, opc decompress]
00:08:17.661 real 0m2.702s
00:08:17.661 user 0m2.422s
00:08:17.661 sys 0m0.189s
00:08:17.661 ************************************
00:08:17.661 END TEST accel_decomp
00:08:17.661 ************************************
00:08:17.661 20:14:47 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0
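accel_decmop_full (the transposed name is the harness's own) repeats the verified decompress with -o 0 added; its trace switches from '4096 bytes' to '111250 bytes', so -o 0 appears to mean one full-file-sized transfer rather than the default 4 KiB blocks:

    # verified decompress using full-file (111250-byte) transfers
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0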
00:08:17.661 ************************************
00:08:17.661 START TEST accel_decmop_full
00:08:17.661 ************************************
00:08:17.661 [xtrace condensed: accel_test launches /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0]
00:08:17.661 [2024-04-24 20:14:47.844614] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization...
00:08:17.661 [2024-04-24 20:14:47.844731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65697 ]
00:08:17.920 [2024-04-24 20:14:48.016347] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:18.178 [2024-04-24 20:14:48.248933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:18.437 [xtrace condensed: test config: core mask 0x1, opc decompress, '111250 bytes' (full file via -o 0), module software, archive /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1 thread, '1 seconds', verify Yes]
00:08:20.341 [xtrace condensed: post-run checks passed: module software, opc decompress]
00:08:20.341 real 0m2.707s
00:08:20.341 user 0m2.434s
00:08:20.341 sys 0m0.185s
00:08:20.341 ************************************
00:08:20.341 END TEST accel_decmop_full
00:08:20.341 ************************************
00:08:20.341 20:14:50 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
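The mcore variant queued above reruns the verified decompress under the core mask -m 0xf; the EAL banner that follows reports four cores and four reactors, and the user time it finishes with (0m7.815s against a 0m2.731s wall clock) shows the cores genuinely working in parallel:

    # spread the verified decompress across cores 0-3
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf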
00:08:20.601 ************************************
00:08:20.601 START TEST accel_decomp_mcore
00:08:20.601 ************************************
00:08:20.601 [xtrace condensed: accel_test launches /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf]
00:08:20.601 [2024-04-24 20:14:50.702774] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization...
00:08:20.601 [2024-04-24 20:14:50.702887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65753 ]
00:08:20.861 [2024-04-24 20:14:50.874798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:21.120 [2024-04-24 20:14:51.121979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
[2024-04-24 20:14:51.122032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
[2024-04-24 20:14:51.122197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[2024-04-24 20:14:51.122237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:08:21.379 [xtrace condensed: test config: core mask 0xf, opc decompress, '4096 bytes', module software, archive /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1 thread, '1 seconds', verify Yes]
00:08:23.284 [xtrace condensed: post-run checks passed: module software, opc decompress]
00:08:23.284 real 0m2.731s
00:08:23.284 user 0m7.815s
00:08:23.284 sys 0m0.234s
00:08:23.284 ************************************
00:08:23.284 END TEST accel_decomp_mcore
00:08:23.284 ************************************
00:08:23.284 20:14:53 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
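accel_decomp_full_mcore combines the two previous knobs, full-file transfers (-o 0) and the four-core mask (-m 0xf), in one invocation:

    # full-file verified decompress on four cores at once
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf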
00:08:23.543 ************************************
00:08:23.543 START TEST accel_decomp_full_mcore
00:08:23.543 ************************************
00:08:23.543 [xtrace condensed: accel_test launches /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf]
00:08:23.543 [2024-04-24 20:14:53.590068] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization...
00:08:23.543 [2024-04-24 20:14:53.590186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65812 ]
00:08:23.802 [2024-04-24 20:14:53.763797] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-04-24 20:14:54.007118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
[2024-04-24 20:14:54.007294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
[2024-04-24 20:14:54.007475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[2024-04-24 20:14:54.007499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:08:24.060 [xtrace condensed: test config: core mask 0xf, opc decompress, '111250 bytes' (full file via -o 0), module software, archive /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 1 thread, '1 seconds', verify Yes]
00:08:26.599 [xtrace condensed: post-run checks passed: module software, opc decompress]
00:08:26.599 real 0m2.794s
00:08:26.599 user 0m8.073s
00:08:26.599 sys 0m0.218s
00:08:26.599 ************************************
00:08:26.599 END TEST accel_decomp_full_mcore
00:08:26.599 ************************************
00:08:26.599 20:14:56 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2
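The mthread variant stays on a single core but passes -T 2; the 'val=2' in its trace suggests -T sets the number of worker threads (accel channels) per core:

    # verified decompress with two threads on one core
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2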
00:08:26.599 ************************************
00:08:26.599 START TEST accel_decomp_mthread
00:08:26.599 ************************************
00:08:26.599 [xtrace condensed: accel_test launches /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2]
00:08:26.599 [2024-04-24 20:14:56.531158] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization...
00:08:26.599 [2024-04-24 20:14:56.531261] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65865 ]
[2024-04-24 20:14:56.705920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:26.856 [2024-04-24 20:14:56.932306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:27.114 [xtrace condensed: test config: core mask 0x1, opc decompress, '4096 bytes', module software, archive /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 2 threads, '1 seconds', verify Yes]
00:08:29.019 [xtrace condensed: post-run checks passed: module software, opc decompress]
00:08:29.019 real 0m2.707s
00:08:29.019 user 0m2.422s
00:08:29.019 sys 0m0.196s
00:08:29.019 ************************************
00:08:29.019 END TEST accel_decomp_mthread
00:08:29.019 ************************************
00:08:29.019 20:14:59 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2
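accel_deomp_full_mthread (again the harness's own spelling) closes the matrix by pairing the full-file transfer size with the two-thread setting:

    # full-file verified decompress across two threads
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 \
        -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2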
00:08:29.278 ************************************
00:08:29.278 START TEST accel_deomp_full_mthread
00:08:29.278 ************************************
00:08:29.278 [xtrace condensed: accel_test launches /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2]
00:08:29.278 [2024-04-24 20:14:59.394179] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization...
00:08:29.278 [2024-04-24 20:14:59.394298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65918 ]
00:08:29.537 [2024-04-24 20:14:59.563407] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:29.796 [2024-04-24 20:14:59.804722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.055 [xtrace condensed: test config: core mask 0x1, opc decompress, '111250 bytes' (full file via -o 0), module software, archive /home/vagrant/spdk_repo/spdk/test/accel/bib, 32, 32, 2 threads, '1 seconds', verify Yes]
00:08:31.961 [xtrace condensed: post-run checks passed: module software, opc decompress]
00:08:31.961 real 0m2.742s
00:08:31.961 user 0m2.471s
00:08:31.961 sys 0m0.182s
00:08:31.961 ************************************
00:08:31.961 END TEST accel_deomp_full_mthread
00:08:31.961 ************************************
************************************ 00:08:31.961 20:15:02 -- common/autotest_common.sh@10 -- # set +x 00:08:31.961 20:15:02 -- accel/accel.sh@124 -- # [[ n == y ]] 00:08:31.961 20:15:02 -- accel/accel.sh@137 -- # build_accel_config 00:08:31.961 20:15:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:31.961 20:15:02 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:31.961 20:15:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:08:31.962 20:15:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:31.962 20:15:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:31.962 20:15:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:31.962 20:15:02 -- accel/accel.sh@40 -- # local IFS=, 00:08:31.962 20:15:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:31.962 20:15:02 -- accel/accel.sh@41 -- # jq -r . 00:08:31.962 20:15:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:31.962 20:15:02 -- common/autotest_common.sh@10 -- # set +x 00:08:32.221 ************************************ 00:08:32.221 START TEST accel_dif_functional_tests 00:08:32.221 ************************************ 00:08:32.221 20:15:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:32.221 [2024-04-24 20:15:02.314911] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:08:32.221 [2024-04-24 20:15:02.315018] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65975 ] 00:08:32.479 [2024-04-24 20:15:02.486501] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:32.745 [2024-04-24 20:15:02.718743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.745 [2024-04-24 20:15:02.718947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.745 [2024-04-24 20:15:02.719048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.004 00:08:33.004 00:08:33.004 CUnit - A unit testing framework for C - Version 2.1-3 00:08:33.004 http://cunit.sourceforge.net/ 00:08:33.004 00:08:33.004 00:08:33.004 Suite: accel_dif 00:08:33.004 Test: verify: DIF generated, GUARD check ...passed 00:08:33.004 Test: verify: DIF generated, APPTAG check ...passed 00:08:33.004 Test: verify: DIF generated, REFTAG check ...passed 00:08:33.004 Test: verify: DIF not generated, GUARD check ...passed 00:08:33.004 Test: verify: DIF not generated, APPTAG check ...passed 00:08:33.004 Test: verify: DIF not generated, REFTAG check ...passed 00:08:33.004 Test: verify: APPTAG correct, APPTAG check ...[2024-04-24 20:15:03.081416] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:33.004 [2024-04-24 20:15:03.081694] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:33.004 [2024-04-24 20:15:03.081758] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:33.004 [2024-04-24 20:15:03.081799] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:33.004 [2024-04-24 20:15:03.081839] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:33.004 [2024-04-24 20:15:03.081882] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref 
Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:33.004 passed 00:08:33.004 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:08:33.004 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:33.004 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:33.004 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:33.004 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:08:33.004 Test: generate copy: DIF generated, GUARD check ...[2024-04-24 20:15:03.082192] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:33.004 [2024-04-24 20:15:03.082459] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:33.004 passed 00:08:33.004 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:33.004 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:33.004 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:33.004 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:33.004 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:33.004 Test: generate copy: iovecs-len validate ...passed 00:08:33.004 Test: generate copy: buffer alignment validate ...[2024-04-24 20:15:03.082877] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:08:33.004 passed 00:08:33.004 00:08:33.004 Run Summary: Type Total Ran Passed Failed Inactive 00:08:33.004 suites 1 1 n/a 0 0 00:08:33.004 tests 20 20 20 0 0 00:08:33.004 asserts 204 204 204 0 n/a 00:08:33.004 00:08:33.004 Elapsed time = 0.005 seconds 00:08:34.393 00:08:34.393 real 0m2.144s 00:08:34.393 user 0m4.128s 00:08:34.393 sys 0m0.258s 00:08:34.393 ************************************ 00:08:34.393 END TEST accel_dif_functional_tests 00:08:34.393 ************************************ 00:08:34.393 20:15:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:34.393 20:15:04 -- common/autotest_common.sh@10 -- # set +x 00:08:34.393 00:08:34.393 real 1m9.408s 00:08:34.393 user 1m13.176s 00:08:34.393 sys 0m7.463s 00:08:34.393 20:15:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:34.393 20:15:04 -- common/autotest_common.sh@10 -- # set +x 00:08:34.393 ************************************ 00:08:34.393 END TEST accel 00:08:34.393 ************************************ 00:08:34.393 20:15:04 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:34.393 20:15:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:34.393 20:15:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:34.393 20:15:04 -- common/autotest_common.sh@10 -- # set +x 00:08:34.393 ************************************ 00:08:34.393 START TEST accel_rpc 00:08:34.393 ************************************ 00:08:34.393 20:15:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:08:34.651 * Looking for test storage... 00:08:34.651 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:08:34.651 20:15:04 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:34.651 20:15:04 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=66062 00:08:34.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
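A note on the accel_rpc test starting here: spdk_tgt is launched with --wait-for-rpc (visible just below), which holds off subsystem initialization so that opcode assignments can be made first. A minimal sketch of that flow, assuming a default build and the stock rpc.py client:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
tgt_pid=$!
sleep 1   # crude stand-in for the waitforlisten helper the test uses
# assign the copy opcode to the software module before framework init,
# mirroring the accel_assign_opc calls recorded below
/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_assign_opc -o copy -m software
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
kill "$tgt_pid"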
00:08:34.651 20:15:04 -- accel/accel_rpc.sh@15 -- # waitforlisten 66062 00:08:34.651 20:15:04 -- common/autotest_common.sh@817 -- # '[' -z 66062 ']' 00:08:34.651 20:15:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.651 20:15:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:34.651 20:15:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.651 20:15:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:34.651 20:15:04 -- common/autotest_common.sh@10 -- # set +x 00:08:34.651 20:15:04 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:34.651 [2024-04-24 20:15:04.792338] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:08:34.651 [2024-04-24 20:15:04.792451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66062 ] 00:08:34.909 [2024-04-24 20:15:04.962369] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.175 [2024-04-24 20:15:05.247486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.448 20:15:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:35.448 20:15:05 -- common/autotest_common.sh@850 -- # return 0 00:08:35.448 20:15:05 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:35.448 20:15:05 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:35.448 20:15:05 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:35.448 20:15:05 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:35.448 20:15:05 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:35.448 20:15:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:35.448 20:15:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:35.448 20:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:35.705 ************************************ 00:08:35.705 START TEST accel_assign_opcode 00:08:35.705 ************************************ 00:08:35.705 20:15:05 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:08:35.705 20:15:05 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:35.705 20:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.705 20:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:35.705 [2024-04-24 20:15:05.691736] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:35.705 20:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.705 20:15:05 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:35.705 20:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.705 20:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:35.705 [2024-04-24 20:15:05.699667] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:35.705 20:15:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:35.705 20:15:05 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:35.705 20:15:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:35.705 20:15:05 -- common/autotest_common.sh@10 -- # set +x 00:08:36.641 20:15:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.641 20:15:06 -- 
accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:36.641 20:15:06 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:36.641 20:15:06 -- accel/accel_rpc.sh@42 -- # grep software 00:08:36.641 20:15:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:36.641 20:15:06 -- common/autotest_common.sh@10 -- # set +x 00:08:36.641 20:15:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:36.641 software 00:08:36.641 00:08:36.641 real 0m1.026s 00:08:36.641 user 0m0.049s 00:08:36.641 sys 0m0.012s 00:08:36.641 20:15:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:36.641 ************************************ 00:08:36.641 END TEST accel_assign_opcode 00:08:36.641 ************************************ 00:08:36.641 20:15:06 -- common/autotest_common.sh@10 -- # set +x 00:08:36.641 20:15:06 -- accel/accel_rpc.sh@55 -- # killprocess 66062 00:08:36.641 20:15:06 -- common/autotest_common.sh@936 -- # '[' -z 66062 ']' 00:08:36.641 20:15:06 -- common/autotest_common.sh@940 -- # kill -0 66062 00:08:36.641 20:15:06 -- common/autotest_common.sh@941 -- # uname 00:08:36.641 20:15:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:36.641 20:15:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66062 00:08:36.641 killing process with pid 66062 00:08:36.641 20:15:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:36.641 20:15:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:36.641 20:15:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66062' 00:08:36.641 20:15:06 -- common/autotest_common.sh@955 -- # kill 66062 00:08:36.641 20:15:06 -- common/autotest_common.sh@960 -- # wait 66062 00:08:39.174 00:08:39.174 real 0m4.669s 00:08:39.174 user 0m4.543s 00:08:39.174 sys 0m0.626s 00:08:39.174 ************************************ 00:08:39.174 END TEST accel_rpc 00:08:39.174 ************************************ 00:08:39.174 20:15:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:39.174 20:15:09 -- common/autotest_common.sh@10 -- # set +x 00:08:39.174 20:15:09 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:39.174 20:15:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:39.174 20:15:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:39.174 20:15:09 -- common/autotest_common.sh@10 -- # set +x 00:08:39.174 ************************************ 00:08:39.174 START TEST app_cmdline 00:08:39.174 ************************************ 00:08:39.174 20:15:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:39.433 * Looking for test storage... 00:08:39.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:39.433 20:15:09 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:39.433 20:15:09 -- app/cmdline.sh@17 -- # spdk_tgt_pid=66197 00:08:39.433 20:15:09 -- app/cmdline.sh@18 -- # waitforlisten 66197 00:08:39.433 20:15:09 -- common/autotest_common.sh@817 -- # '[' -z 66197 ']' 00:08:39.433 20:15:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.433 20:15:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:39.433 20:15:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
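The assignment made above is verified by reading the opcode map back; the same check as a one-liner, composed from the rpc_cmd, jq and grep steps recorded in this run:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_get_opc_assignments \
    | jq -r .copy | grep -q software \
    && echo 'copy opcode assigned to software module'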
00:08:39.433 20:15:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:39.433 20:15:09 -- common/autotest_common.sh@10 -- # set +x 00:08:39.433 20:15:09 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:39.433 [2024-04-24 20:15:09.650295] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:08:39.433 [2024-04-24 20:15:09.650444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66197 ] 00:08:39.711 [2024-04-24 20:15:09.831081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.975 [2024-04-24 20:15:10.071564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.912 20:15:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:40.912 20:15:11 -- common/autotest_common.sh@850 -- # return 0 00:08:40.912 20:15:11 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:41.170 { 00:08:41.170 "version": "SPDK v24.05-pre git sha1 be7d3cb46", 00:08:41.170 "fields": { 00:08:41.170 "major": 24, 00:08:41.170 "minor": 5, 00:08:41.170 "patch": 0, 00:08:41.170 "suffix": "-pre", 00:08:41.170 "commit": "be7d3cb46" 00:08:41.170 } 00:08:41.170 } 00:08:41.170 20:15:11 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:41.170 20:15:11 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:41.170 20:15:11 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:41.170 20:15:11 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:41.170 20:15:11 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:41.170 20:15:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:41.170 20:15:11 -- common/autotest_common.sh@10 -- # set +x 00:08:41.170 20:15:11 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:41.170 20:15:11 -- app/cmdline.sh@26 -- # sort 00:08:41.170 20:15:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:41.170 20:15:11 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:41.170 20:15:11 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:41.170 20:15:11 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:41.170 20:15:11 -- common/autotest_common.sh@638 -- # local es=0 00:08:41.170 20:15:11 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:41.170 20:15:11 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.170 20:15:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:41.170 20:15:11 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.170 20:15:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:41.170 20:15:11 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.170 20:15:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:41.170 20:15:11 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:41.170 20:15:11 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 
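Because the cmdline test started spdk_tgt with a whitelist (--rpcs-allowed spdk_get_version,rpc_get_methods, visible above), the env_dpdk_get_mem_stats call recorded just below is expected to fail with -32601 Method not found. A minimal sketch of the same negative check, assuming the target is still running with that whitelist:

if /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats >/dev/null 2>&1; then
    echo 'unexpected: whitelisted target accepted a disallowed RPC' >&2
else
    echo 'disallowed RPC rejected as expected (Method not found)'
fi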
00:08:41.170 20:15:11 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:41.429 request: 00:08:41.429 { 00:08:41.429 "method": "env_dpdk_get_mem_stats", 00:08:41.429 "req_id": 1 00:08:41.429 } 00:08:41.429 Got JSON-RPC error response 00:08:41.429 response: 00:08:41.429 { 00:08:41.429 "code": -32601, 00:08:41.429 "message": "Method not found" 00:08:41.429 } 00:08:41.429 20:15:11 -- common/autotest_common.sh@641 -- # es=1 00:08:41.429 20:15:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:41.429 20:15:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:41.429 20:15:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:41.429 20:15:11 -- app/cmdline.sh@1 -- # killprocess 66197 00:08:41.429 20:15:11 -- common/autotest_common.sh@936 -- # '[' -z 66197 ']' 00:08:41.429 20:15:11 -- common/autotest_common.sh@940 -- # kill -0 66197 00:08:41.429 20:15:11 -- common/autotest_common.sh@941 -- # uname 00:08:41.429 20:15:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:41.429 20:15:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66197 00:08:41.429 killing process with pid 66197 00:08:41.429 20:15:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:41.429 20:15:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:41.429 20:15:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66197' 00:08:41.429 20:15:11 -- common/autotest_common.sh@955 -- # kill 66197 00:08:41.429 20:15:11 -- common/autotest_common.sh@960 -- # wait 66197 00:08:43.987 00:08:43.987 real 0m4.566s 00:08:43.987 user 0m4.720s 00:08:43.987 sys 0m0.620s 00:08:43.987 20:15:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:43.987 20:15:13 -- common/autotest_common.sh@10 -- # set +x 00:08:43.987 ************************************ 00:08:43.987 END TEST app_cmdline 00:08:43.987 ************************************ 00:08:43.987 20:15:14 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:43.987 20:15:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:43.987 20:15:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.987 20:15:14 -- common/autotest_common.sh@10 -- # set +x 00:08:43.987 ************************************ 00:08:43.987 START TEST version 00:08:43.987 ************************************ 00:08:43.987 20:15:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:44.245 * Looking for test storage... 
00:08:44.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:44.245 20:15:14 -- app/version.sh@17 -- # get_header_version major 00:08:44.245 20:15:14 -- app/version.sh@14 -- # tr -d '"' 00:08:44.245 20:15:14 -- app/version.sh@14 -- # cut -f2 00:08:44.245 20:15:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:44.245 20:15:14 -- app/version.sh@17 -- # major=24 00:08:44.245 20:15:14 -- app/version.sh@18 -- # get_header_version minor 00:08:44.245 20:15:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:44.245 20:15:14 -- app/version.sh@14 -- # cut -f2 00:08:44.245 20:15:14 -- app/version.sh@14 -- # tr -d '"' 00:08:44.245 20:15:14 -- app/version.sh@18 -- # minor=5 00:08:44.245 20:15:14 -- app/version.sh@19 -- # get_header_version patch 00:08:44.245 20:15:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:44.245 20:15:14 -- app/version.sh@14 -- # cut -f2 00:08:44.245 20:15:14 -- app/version.sh@14 -- # tr -d '"' 00:08:44.246 20:15:14 -- app/version.sh@19 -- # patch=0 00:08:44.246 20:15:14 -- app/version.sh@20 -- # get_header_version suffix 00:08:44.246 20:15:14 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:44.246 20:15:14 -- app/version.sh@14 -- # tr -d '"' 00:08:44.246 20:15:14 -- app/version.sh@14 -- # cut -f2 00:08:44.246 20:15:14 -- app/version.sh@20 -- # suffix=-pre 00:08:44.246 20:15:14 -- app/version.sh@22 -- # version=24.5 00:08:44.246 20:15:14 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:44.246 20:15:14 -- app/version.sh@28 -- # version=24.5rc0 00:08:44.246 20:15:14 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:44.246 20:15:14 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:44.246 20:15:14 -- app/version.sh@30 -- # py_version=24.5rc0 00:08:44.246 20:15:14 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:08:44.246 00:08:44.246 real 0m0.238s 00:08:44.246 user 0m0.125s 00:08:44.246 sys 0m0.162s 00:08:44.246 ************************************ 00:08:44.246 END TEST version 00:08:44.246 ************************************ 00:08:44.246 20:15:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:44.246 20:15:14 -- common/autotest_common.sh@10 -- # set +x 00:08:44.246 20:15:14 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:08:44.246 20:15:14 -- spdk/autotest.sh@194 -- # uname -s 00:08:44.246 20:15:14 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:44.246 20:15:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:44.246 20:15:14 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:44.246 20:15:14 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:08:44.246 20:15:14 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:44.246 20:15:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:44.246 20:15:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:44.246 20:15:14 -- common/autotest_common.sh@10 -- # set +x 00:08:44.505 ************************************ 00:08:44.505 START TEST blockdev_nvme 
00:08:44.505 ************************************ 00:08:44.505 20:15:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:44.505 * Looking for test storage... 00:08:44.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:44.505 20:15:14 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:44.505 20:15:14 -- bdev/nbd_common.sh@6 -- # set -e 00:08:44.505 20:15:14 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:44.505 20:15:14 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:44.505 20:15:14 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:44.505 20:15:14 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:44.505 20:15:14 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:44.505 20:15:14 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:44.505 20:15:14 -- bdev/blockdev.sh@20 -- # : 00:08:44.505 20:15:14 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:44.505 20:15:14 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:44.505 20:15:14 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:44.505 20:15:14 -- bdev/blockdev.sh@674 -- # uname -s 00:08:44.505 20:15:14 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:44.505 20:15:14 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:44.505 20:15:14 -- bdev/blockdev.sh@682 -- # test_type=nvme 00:08:44.505 20:15:14 -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:44.505 20:15:14 -- bdev/blockdev.sh@684 -- # dek= 00:08:44.505 20:15:14 -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:44.505 20:15:14 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:44.505 20:15:14 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:44.505 20:15:14 -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:08:44.505 20:15:14 -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:08:44.505 20:15:14 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:44.505 20:15:14 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=66382 00:08:44.505 20:15:14 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:44.505 20:15:14 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:44.505 20:15:14 -- bdev/blockdev.sh@49 -- # waitforlisten 66382 00:08:44.505 20:15:14 -- common/autotest_common.sh@817 -- # '[' -z 66382 ']' 00:08:44.505 20:15:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.505 20:15:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:44.505 20:15:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.505 20:15:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:44.505 20:15:14 -- common/autotest_common.sh@10 -- # set +x 00:08:44.764 [2024-04-24 20:15:14.772721] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
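The waitforlisten step above blocks until the freshly started target answers on its RPC socket. A rough equivalent of what that helper in test/common/autotest_common.sh does, assuming the default /var/tmp/spdk.sock address:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
# poll the RPC socket until any method call succeeds
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done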
00:08:44.764 [2024-04-24 20:15:14.773011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66382 ] 00:08:44.764 [2024-04-24 20:15:14.941339] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.024 [2024-04-24 20:15:15.187038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.963 20:15:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:45.963 20:15:16 -- common/autotest_common.sh@850 -- # return 0 00:08:45.963 20:15:16 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:45.963 20:15:16 -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:08:45.963 20:15:16 -- bdev/blockdev.sh@81 -- # local json 00:08:45.963 20:15:16 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:45.963 20:15:16 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:46.222 20:15:16 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:46.222 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.222 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.482 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.482 20:15:16 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:46.482 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.482 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.482 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.482 20:15:16 -- bdev/blockdev.sh@740 -- # cat 00:08:46.482 20:15:16 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:46.482 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.482 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.482 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.482 20:15:16 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:46.482 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.482 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.482 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.482 20:15:16 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:46.482 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.482 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.482 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.482 20:15:16 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:46.482 20:15:16 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:46.482 20:15:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:46.482 20:15:16 -- common/autotest_common.sh@10 -- # set +x 00:08:46.482 20:15:16 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:46.482 20:15:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:46.742 20:15:16 -- 
bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:46.742 20:15:16 -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:46.742 20:15:16 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "73518a35-f857-4bc2-b331-608ad863d17f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "73518a35-f857-4bc2-b331-608ad863d17f",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "de39659c-075f-42f9-ae44-5f4cdbd72c40"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "de39659c-075f-42f9-ae44-5f4cdbd72c40",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "db73ca12-55be-407a-82d3-509e5ca15937"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "db73ca12-55be-407a-82d3-509e5ca15937",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": 
"0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "71599a0f-bf29-4982-9e84-1505059a8dc5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "71599a0f-bf29-4982-9e84-1505059a8dc5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "72e50c45-81b3-4d61-9974-c627922eb337"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "72e50c45-81b3-4d61-9974-c627922eb337",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "125f71f1-aa09-42a4-b38e-add1c6da2979"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "125f71f1-aa09-42a4-b38e-add1c6da2979",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": 
false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:46.742 20:15:16 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:46.742 20:15:16 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:08:46.742 20:15:16 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:46.742 20:15:16 -- bdev/blockdev.sh@754 -- # killprocess 66382 00:08:46.742 20:15:16 -- common/autotest_common.sh@936 -- # '[' -z 66382 ']' 00:08:46.742 20:15:16 -- common/autotest_common.sh@940 -- # kill -0 66382 00:08:46.742 20:15:16 -- common/autotest_common.sh@941 -- # uname 00:08:46.742 20:15:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:46.742 20:15:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66382 00:08:46.742 killing process with pid 66382 00:08:46.742 20:15:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:46.742 20:15:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:46.742 20:15:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66382' 00:08:46.742 20:15:16 -- common/autotest_common.sh@955 -- # kill 66382 00:08:46.742 20:15:16 -- common/autotest_common.sh@960 -- # wait 66382 00:08:49.365 20:15:19 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:49.365 20:15:19 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:49.365 20:15:19 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:49.365 20:15:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.365 20:15:19 -- common/autotest_common.sh@10 -- # set +x 00:08:49.365 ************************************ 00:08:49.365 START TEST bdev_hello_world 00:08:49.365 ************************************ 00:08:49.365 20:15:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:49.365 [2024-04-24 20:15:19.410431] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
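The long JSON dump above is the output of bdev_get_bdevs; the script filters it to unclaimed bdevs and extracts their names into bdev_list. The same selection as a standalone pipeline, composed from the jq filters recorded above:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name'
# expected here: Nvme0n1, Nvme1n1, Nvme2n1, Nvme2n2, Nvme2n3, Nvme3n1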
00:08:49.365 [2024-04-24 20:15:19.410547] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66493 ] 00:08:49.365 [2024-04-24 20:15:19.580137] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.623 [2024-04-24 20:15:19.816791] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.559 [2024-04-24 20:15:20.505767] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:50.559 [2024-04-24 20:15:20.505826] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:50.559 [2024-04-24 20:15:20.505849] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:50.559 [2024-04-24 20:15:20.508788] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:50.559 [2024-04-24 20:15:20.509545] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:50.559 [2024-04-24 20:15:20.509592] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:50.559 [2024-04-24 20:15:20.509803] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:50.559 00:08:50.559 [2024-04-24 20:15:20.509829] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:51.496 00:08:51.496 real 0m2.347s 00:08:51.496 user 0m1.984s 00:08:51.496 sys 0m0.251s 00:08:51.496 20:15:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:51.496 20:15:21 -- common/autotest_common.sh@10 -- # set +x 00:08:51.496 ************************************ 00:08:51.496 END TEST bdev_hello_world 00:08:51.496 ************************************ 00:08:51.496 20:15:21 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:51.496 20:15:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:51.496 20:15:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.496 20:15:21 -- common/autotest_common.sh@10 -- # set +x 00:08:51.755 ************************************ 00:08:51.755 START TEST bdev_bounds 00:08:51.755 ************************************ 00:08:51.755 20:15:21 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:08:51.755 20:15:21 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:51.755 20:15:21 -- bdev/blockdev.sh@290 -- # bdevio_pid=66539 00:08:51.755 20:15:21 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:51.755 Process bdevio pid: 66539 00:08:51.755 20:15:21 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 66539' 00:08:51.755 20:15:21 -- bdev/blockdev.sh@293 -- # waitforlisten 66539 00:08:51.755 20:15:21 -- common/autotest_common.sh@817 -- # '[' -z 66539 ']' 00:08:51.755 20:15:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.755 20:15:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:51.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.755 20:15:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
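The hello-world case that just completed drives a single write and read of "Hello World!" through the bdev layer. A sketch of the standalone invocation, matching the command recorded above (the trailing empty argument is passed through by the harness; its purpose here is assumed, not documented in this log):

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b Nvme0n1 ''
# expected notices: open the bdev, write the string, read it back, stop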
00:08:51.755 20:15:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:51.755 20:15:21 -- common/autotest_common.sh@10 -- # set +x 00:08:51.755 [2024-04-24 20:15:21.926049] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:08:51.755 [2024-04-24 20:15:21.926167] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66539 ] 00:08:52.015 [2024-04-24 20:15:22.098535] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:52.274 [2024-04-24 20:15:22.344320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.274 [2024-04-24 20:15:22.344474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.274 [2024-04-24 20:15:22.344503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.841 20:15:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:52.841 20:15:23 -- common/autotest_common.sh@850 -- # return 0 00:08:52.841 20:15:23 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:53.101 I/O targets: 00:08:53.101 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:53.101 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:53.101 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:53.101 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:53.101 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:53.101 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:53.101 00:08:53.101 00:08:53.101 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.101 http://cunit.sourceforge.net/ 00:08:53.101 00:08:53.101 00:08:53.101 Suite: bdevio tests on: Nvme3n1 00:08:53.101 Test: blockdev write read block ...passed 00:08:53.101 Test: blockdev write zeroes read block ...passed 00:08:53.101 Test: blockdev write zeroes read no split ...passed 00:08:53.101 Test: blockdev write zeroes read split ...passed 00:08:53.101 Test: blockdev write zeroes read split partial ...passed 00:08:53.101 Test: blockdev reset ...[2024-04-24 20:15:23.236033] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:53.101 passed 00:08:53.101 Test: blockdev write read 8 blocks ...[2024-04-24 20:15:23.239590] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:53.101 passed 00:08:53.101 Test: blockdev write read size > 128k ...passed 00:08:53.101 Test: blockdev write read invalid size ...passed 00:08:53.101 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:53.101 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:53.101 Test: blockdev write read max offset ...passed 00:08:53.101 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:53.101 Test: blockdev writev readv 8 blocks ...passed 00:08:53.101 Test: blockdev writev readv 30 x 1block ...passed 00:08:53.101 Test: blockdev writev readv block ...passed 00:08:53.101 Test: blockdev writev readv size > 128k ...passed 00:08:53.101 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:53.101 Test: blockdev comparev and writev ...[2024-04-24 20:15:23.248880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28c00e000 len:0x1000 00:08:53.101 [2024-04-24 20:15:23.248928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:53.101 passed 00:08:53.101 Test: blockdev nvme passthru rw ...passed 00:08:53.101 Test: blockdev nvme passthru vendor specific ...[2024-04-24 20:15:23.249882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:53.101 passed 00:08:53.101 Test: blockdev nvme admin passthru ...[2024-04-24 20:15:23.249915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:53.101 passed 00:08:53.101 Test: blockdev copy ...passed 00:08:53.101 Suite: bdevio tests on: Nvme2n3 00:08:53.101 Test: blockdev write read block ...passed 00:08:53.101 Test: blockdev write zeroes read block ...passed 00:08:53.101 Test: blockdev write zeroes read no split ...passed 00:08:53.101 Test: blockdev write zeroes read split ...passed 00:08:53.361 Test: blockdev write zeroes read split partial ...passed 00:08:53.361 Test: blockdev reset ...[2024-04-24 20:15:23.360144] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:53.361 passed 00:08:53.361 Test: blockdev write read 8 blocks ...[2024-04-24 20:15:23.363998] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:53.361 passed 00:08:53.361 Test: blockdev write read size > 128k ...passed 00:08:53.361 Test: blockdev write read invalid size ...passed 00:08:53.361 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:53.361 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:53.361 Test: blockdev write read max offset ...passed 00:08:53.361 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:53.361 Test: blockdev writev readv 8 blocks ...passed 00:08:53.361 Test: blockdev writev readv 30 x 1block ...passed 00:08:53.361 Test: blockdev writev readv block ...passed 00:08:53.361 Test: blockdev writev readv size > 128k ...passed 00:08:53.361 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:53.361 Test: blockdev comparev and writev ...[2024-04-24 20:15:23.372366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28c00a000 len:0x1000 00:08:53.361 [2024-04-24 20:15:23.372416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:53.361 passed 00:08:53.361 Test: blockdev nvme passthru rw ...passed 00:08:53.361 Test: blockdev nvme passthru vendor specific ...[2024-04-24 20:15:23.373233] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:53.361 [2024-04-24 20:15:23.373264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:53.361 passed 00:08:53.361 Test: blockdev nvme admin passthru ...passed 00:08:53.361 Test: blockdev copy ...passed 00:08:53.361 Suite: bdevio tests on: Nvme2n2 00:08:53.361 Test: blockdev write read block ...passed 00:08:53.361 Test: blockdev write zeroes read block ...passed 00:08:53.361 Test: blockdev write zeroes read no split ...passed 00:08:53.361 Test: blockdev write zeroes read split ...passed 00:08:53.361 Test: blockdev write zeroes read split partial ...passed 00:08:53.361 Test: blockdev reset ...[2024-04-24 20:15:23.454654] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:53.361 passed 00:08:53.361 Test: blockdev write read 8 blocks ...[2024-04-24 20:15:23.458310] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:53.361 passed 00:08:53.361 Test: blockdev write read size > 128k ...passed 00:08:53.361 Test: blockdev write read invalid size ...passed 00:08:53.361 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:53.361 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:53.361 Test: blockdev write read max offset ...passed 00:08:53.361 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:53.361 Test: blockdev writev readv 8 blocks ...passed 00:08:53.361 Test: blockdev writev readv 30 x 1block ...passed 00:08:53.361 Test: blockdev writev readv block ...passed 00:08:53.361 Test: blockdev writev readv size > 128k ...passed 00:08:53.361 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:53.361 Test: blockdev comparev and writev ...[2024-04-24 20:15:23.466419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x280606000 len:0x1000 00:08:53.361 [2024-04-24 20:15:23.466467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:53.361 passed 00:08:53.361 Test: blockdev nvme passthru rw ...passed 00:08:53.361 Test: blockdev nvme passthru vendor specific ...[2024-04-24 20:15:23.467343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:53.361 [2024-04-24 20:15:23.467377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:53.361 passed 00:08:53.361 Test: blockdev nvme admin passthru ...passed 00:08:53.361 Test: blockdev copy ...passed 00:08:53.361 Suite: bdevio tests on: Nvme2n1 00:08:53.361 Test: blockdev write read block ...passed 00:08:53.361 Test: blockdev write zeroes read block ...passed 00:08:53.361 Test: blockdev write zeroes read no split ...passed 00:08:53.361 Test: blockdev write zeroes read split ...passed 00:08:53.361 Test: blockdev write zeroes read split partial ...passed 00:08:53.361 Test: blockdev reset ...[2024-04-24 20:15:23.547587] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:53.361 passed 00:08:53.361 Test: blockdev write read 8 blocks ...[2024-04-24 20:15:23.551529] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:53.361 passed 00:08:53.361 Test: blockdev write read size > 128k ...passed 00:08:53.361 Test: blockdev write read invalid size ...passed 00:08:53.361 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:53.361 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:53.361 Test: blockdev write read max offset ...passed 00:08:53.361 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:53.361 Test: blockdev writev readv 8 blocks ...passed 00:08:53.361 Test: blockdev writev readv 30 x 1block ...passed 00:08:53.361 Test: blockdev writev readv block ...passed 00:08:53.361 Test: blockdev writev readv size > 128k ...passed 00:08:53.361 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:53.361 Test: blockdev comparev and writev ...[2024-04-24 20:15:23.562740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x280601000 len:0x1000 00:08:53.361 [2024-04-24 20:15:23.562794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:53.361 passed 00:08:53.361 Test: blockdev nvme passthru rw ...passed 00:08:53.361 Test: blockdev nvme passthru vendor specific ...[2024-04-24 20:15:23.563954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:53.362 [2024-04-24 20:15:23.563992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:53.362 passed 00:08:53.362 Test: blockdev nvme admin passthru ...passed 00:08:53.362 Test: blockdev copy ...passed 00:08:53.362 Suite: bdevio tests on: Nvme1n1 00:08:53.362 Test: blockdev write read block ...passed 00:08:53.362 Test: blockdev write zeroes read block ...passed 00:08:53.362 Test: blockdev write zeroes read no split ...passed 00:08:53.661 Test: blockdev write zeroes read split ...passed 00:08:53.661 Test: blockdev write zeroes read split partial ...passed 00:08:53.661 Test: blockdev reset ...[2024-04-24 20:15:23.653829] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:53.661 passed 00:08:53.661 Test: blockdev write read 8 blocks ...[2024-04-24 20:15:23.657428] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:53.661 passed 00:08:53.661 Test: blockdev write read size > 128k ...passed 00:08:53.661 Test: blockdev write read invalid size ...passed 00:08:53.661 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:53.661 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:53.661 Test: blockdev write read max offset ...passed 00:08:53.661 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:53.661 Test: blockdev writev readv 8 blocks ...passed 00:08:53.661 Test: blockdev writev readv 30 x 1block ...passed 00:08:53.661 Test: blockdev writev readv block ...passed 00:08:53.661 Test: blockdev writev readv size > 128k ...passed 00:08:53.661 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:53.661 Test: blockdev comparev and writev ...[2024-04-24 20:15:23.665340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x290206000 len:0x1000 00:08:53.661 [2024-04-24 20:15:23.665391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:53.661 passed 00:08:53.661 Test: blockdev nvme passthru rw ...passed 00:08:53.661 Test: blockdev nvme passthru vendor specific ...[2024-04-24 20:15:23.666379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:53.661 [2024-04-24 20:15:23.666415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:53.661 passed 00:08:53.661 Test: blockdev nvme admin passthru ...passed 00:08:53.661 Test: blockdev copy ...passed 00:08:53.661 Suite: bdevio tests on: Nvme0n1 00:08:53.661 Test: blockdev write read block ...passed 00:08:53.661 Test: blockdev write zeroes read block ...passed 00:08:53.661 Test: blockdev write zeroes read no split ...passed 00:08:53.661 Test: blockdev write zeroes read split ...passed 00:08:53.661 Test: blockdev write zeroes read split partial ...passed 00:08:53.661 Test: blockdev reset ...[2024-04-24 20:15:23.751288] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:53.661 passed 00:08:53.661 Test: blockdev write read 8 blocks ...[2024-04-24 20:15:23.754792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:53.661 passed 00:08:53.661 Test: blockdev write read size > 128k ...passed 00:08:53.661 Test: blockdev write read invalid size ...passed 00:08:53.661 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:53.661 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:53.661 Test: blockdev write read max offset ...passed 00:08:53.661 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:53.661 Test: blockdev writev readv 8 blocks ...passed 00:08:53.661 Test: blockdev writev readv 30 x 1block ...passed 00:08:53.661 Test: blockdev writev readv block ...passed 00:08:53.661 Test: blockdev writev readv size > 128k ...passed 00:08:53.661 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:53.661 Test: blockdev comparev and writev ...[2024-04-24 20:15:23.761992] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:53.661 separate metadata which is not supported yet. 
00:08:53.661 passed 00:08:53.661 Test: blockdev nvme passthru rw ...passed 00:08:53.661 Test: blockdev nvme passthru vendor specific ...[2024-04-24 20:15:23.762644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:53.661 [2024-04-24 20:15:23.762685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:53.661 passed 00:08:53.661 Test: blockdev nvme admin passthru ...passed 00:08:53.661 Test: blockdev copy ...passed 00:08:53.661 00:08:53.661 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.661 suites 6 6 n/a 0 0 00:08:53.661 tests 138 138 138 0 0 00:08:53.661 asserts 893 893 893 0 n/a 00:08:53.661 00:08:53.661 Elapsed time = 1.670 seconds 00:08:53.661 0 00:08:53.661 20:15:23 -- bdev/blockdev.sh@295 -- # killprocess 66539 00:08:53.661 20:15:23 -- common/autotest_common.sh@936 -- # '[' -z 66539 ']' 00:08:53.661 20:15:23 -- common/autotest_common.sh@940 -- # kill -0 66539 00:08:53.661 20:15:23 -- common/autotest_common.sh@941 -- # uname 00:08:53.661 20:15:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:53.661 20:15:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66539 00:08:53.661 20:15:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:53.661 20:15:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:53.661 killing process with pid 66539 00:08:53.661 20:15:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66539' 00:08:53.661 20:15:23 -- common/autotest_common.sh@955 -- # kill 66539 00:08:53.661 20:15:23 -- common/autotest_common.sh@960 -- # wait 66539 00:08:55.046 20:15:24 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:55.046 00:08:55.046 real 0m3.115s 00:08:55.046 user 0m7.628s 00:08:55.046 sys 0m0.417s 00:08:55.046 20:15:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:55.046 20:15:24 -- common/autotest_common.sh@10 -- # set +x 00:08:55.046 ************************************ 00:08:55.047 END TEST bdev_bounds 00:08:55.047 ************************************ 00:08:55.047 20:15:25 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:55.047 20:15:25 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:08:55.047 20:15:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.047 20:15:25 -- common/autotest_common.sh@10 -- # set +x 00:08:55.047 ************************************ 00:08:55.047 START TEST bdev_nbd 00:08:55.047 ************************************ 00:08:55.047 20:15:25 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:55.047 20:15:25 -- bdev/blockdev.sh@300 -- # uname -s 00:08:55.047 20:15:25 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:55.047 20:15:25 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.047 20:15:25 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:55.047 20:15:25 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.047 20:15:25 -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:55.047 20:15:25 -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:08:55.047 20:15:25 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd 
]] 00:08:55.047 20:15:25 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:55.047 20:15:25 -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:55.047 20:15:25 -- bdev/blockdev.sh@312 -- # bdev_num=6 00:08:55.047 20:15:25 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:55.047 20:15:25 -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:55.047 20:15:25 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:55.047 20:15:25 -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:55.047 20:15:25 -- bdev/blockdev.sh@318 -- # nbd_pid=66608 00:08:55.047 20:15:25 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:55.047 20:15:25 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:55.047 20:15:25 -- bdev/blockdev.sh@320 -- # waitforlisten 66608 /var/tmp/spdk-nbd.sock 00:08:55.047 20:15:25 -- common/autotest_common.sh@817 -- # '[' -z 66608 ']' 00:08:55.047 20:15:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:55.047 20:15:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:55.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:55.047 20:15:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:55.047 20:15:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:55.047 20:15:25 -- common/autotest_common.sh@10 -- # set +x 00:08:55.047 [2024-04-24 20:15:25.210304] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
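The stage starting here drives the nbd function test: it launches the bdev_svc app against the bdev JSON config and waits for its RPC socket before issuing any nbd_* calls. A condensed sketch of that startup, with paths taken verbatim from the trace (the readiness poll is a simplification of the harness's waitforlisten helper, not its exact code):

    # Start the SPDK app that exposes bdevs over NBD (paths as traced above)
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-nbd.sock -i 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    nbd_pid=$!
    # Poll until the RPC server answers on the UNIX socket (simplified waitforlisten)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
        rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done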
00:08:55.047 [2024-04-24 20:15:25.210410] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.306 [2024-04-24 20:15:25.380889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.566 [2024-04-24 20:15:25.661074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.504 20:15:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:56.504 20:15:26 -- common/autotest_common.sh@850 -- # return 0 00:08:56.504 20:15:26 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@24 -- # local i 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:56.504 20:15:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:08:56.504 20:15:26 -- common/autotest_common.sh@855 -- # local i 00:08:56.504 20:15:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:56.504 20:15:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:56.504 20:15:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:08:56.504 20:15:26 -- common/autotest_common.sh@859 -- # break 00:08:56.504 20:15:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:56.504 20:15:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:56.504 20:15:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.504 1+0 records in 00:08:56.504 1+0 records out 00:08:56.504 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045466 s, 9.0 MB/s 00:08:56.504 20:15:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.504 20:15:26 -- common/autotest_common.sh@872 -- # size=4096 00:08:56.504 20:15:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.504 20:15:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:56.504 20:15:26 -- common/autotest_common.sh@875 -- # return 0 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:56.504 20:15:26 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:56.504 20:15:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:56.763 20:15:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:56.763 20:15:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:56.763 20:15:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:56.763 20:15:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:08:56.763 20:15:26 -- common/autotest_common.sh@855 -- # local i 00:08:56.763 20:15:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:56.763 20:15:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:56.763 20:15:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:08:56.763 20:15:26 -- common/autotest_common.sh@859 -- # break 00:08:56.763 20:15:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:56.763 20:15:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:56.763 20:15:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:56.763 1+0 records in 00:08:56.763 1+0 records out 00:08:56.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490393 s, 8.4 MB/s 00:08:56.763 20:15:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.763 20:15:26 -- common/autotest_common.sh@872 -- # size=4096 00:08:56.763 20:15:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:56.763 20:15:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:56.763 20:15:26 -- common/autotest_common.sh@875 -- # return 0 00:08:56.763 20:15:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:56.763 20:15:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:56.763 20:15:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:57.021 20:15:27 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:57.021 20:15:27 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:57.021 20:15:27 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:57.021 20:15:27 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:08:57.021 20:15:27 -- common/autotest_common.sh@855 -- # local i 00:08:57.021 20:15:27 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:57.021 20:15:27 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:57.021 20:15:27 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:08:57.021 20:15:27 -- common/autotest_common.sh@859 -- # break 00:08:57.021 20:15:27 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:57.021 20:15:27 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:57.021 20:15:27 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.021 1+0 records in 00:08:57.021 1+0 records out 00:08:57.021 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691945 s, 5.9 MB/s 00:08:57.021 20:15:27 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.021 20:15:27 -- common/autotest_common.sh@872 -- # size=4096 00:08:57.021 20:15:27 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.021 20:15:27 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:57.021 20:15:27 -- common/autotest_common.sh@875 -- # return 0 
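Each nbd_start_disk block traced above and below follows the same pattern. Condensed, with the commands exactly as they appear in the trace (when no /dev/nbdN argument is passed, the RPC allocates a device and prints its path; the up-to-20-iteration retry loops are omitted here):

    for bdev in Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
        # RPC returns the allocated device node, e.g. /dev/nbd0
        nbd=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            nbd_start_disk "$bdev")
        grep -q -w "$(basename "$nbd")" /proc/partitions      # device visible to the kernel?
        dd if="$nbd" of=nbdtest bs=4096 count=1 iflag=direct  # one 4 KiB direct read
        [ "$(stat -c %s nbdtest)" != 0 ]                      # the probe must return data
        rm -f nbdtest
    done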
00:08:57.021 20:15:27 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:57.021 20:15:27 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:57.021 20:15:27 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:57.281 20:15:27 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:57.281 20:15:27 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:57.281 20:15:27 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:57.281 20:15:27 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:08:57.281 20:15:27 -- common/autotest_common.sh@855 -- # local i 00:08:57.281 20:15:27 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:57.281 20:15:27 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:57.281 20:15:27 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:08:57.281 20:15:27 -- common/autotest_common.sh@859 -- # break 00:08:57.281 20:15:27 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:57.281 20:15:27 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:57.281 20:15:27 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.281 1+0 records in 00:08:57.281 1+0 records out 00:08:57.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715552 s, 5.7 MB/s 00:08:57.281 20:15:27 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.281 20:15:27 -- common/autotest_common.sh@872 -- # size=4096 00:08:57.281 20:15:27 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.281 20:15:27 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:57.281 20:15:27 -- common/autotest_common.sh@875 -- # return 0 00:08:57.281 20:15:27 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:57.281 20:15:27 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:57.281 20:15:27 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:57.540 20:15:27 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:57.540 20:15:27 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:57.540 20:15:27 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:57.540 20:15:27 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:08:57.540 20:15:27 -- common/autotest_common.sh@855 -- # local i 00:08:57.540 20:15:27 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:57.540 20:15:27 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:57.540 20:15:27 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:08:57.540 20:15:27 -- common/autotest_common.sh@859 -- # break 00:08:57.540 20:15:27 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:57.540 20:15:27 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:57.540 20:15:27 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.540 1+0 records in 00:08:57.540 1+0 records out 00:08:57.540 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057209 s, 7.2 MB/s 00:08:57.540 20:15:27 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.540 20:15:27 -- common/autotest_common.sh@872 -- # size=4096 00:08:57.540 20:15:27 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.540 20:15:27 -- common/autotest_common.sh@874 -- # '[' 4096 
'!=' 0 ']' 00:08:57.540 20:15:27 -- common/autotest_common.sh@875 -- # return 0 00:08:57.540 20:15:27 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:57.540 20:15:27 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:57.540 20:15:27 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:57.800 20:15:27 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:57.800 20:15:27 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:57.800 20:15:27 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:57.800 20:15:27 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:08:57.800 20:15:27 -- common/autotest_common.sh@855 -- # local i 00:08:57.800 20:15:27 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:57.800 20:15:27 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:57.800 20:15:27 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:08:57.800 20:15:27 -- common/autotest_common.sh@859 -- # break 00:08:57.800 20:15:27 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:57.800 20:15:27 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:57.800 20:15:27 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:57.800 1+0 records in 00:08:57.800 1+0 records out 00:08:57.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000916904 s, 4.5 MB/s 00:08:57.800 20:15:27 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.800 20:15:27 -- common/autotest_common.sh@872 -- # size=4096 00:08:57.800 20:15:27 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:57.800 20:15:27 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:57.800 20:15:27 -- common/autotest_common.sh@875 -- # return 0 00:08:57.800 20:15:27 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:57.800 20:15:27 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:57.800 20:15:27 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:57.800 20:15:28 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:57.800 { 00:08:57.800 "nbd_device": "/dev/nbd0", 00:08:57.800 "bdev_name": "Nvme0n1" 00:08:57.800 }, 00:08:57.800 { 00:08:57.800 "nbd_device": "/dev/nbd1", 00:08:57.800 "bdev_name": "Nvme1n1" 00:08:57.800 }, 00:08:57.800 { 00:08:57.800 "nbd_device": "/dev/nbd2", 00:08:57.800 "bdev_name": "Nvme2n1" 00:08:57.800 }, 00:08:57.800 { 00:08:57.800 "nbd_device": "/dev/nbd3", 00:08:57.800 "bdev_name": "Nvme2n2" 00:08:57.800 }, 00:08:57.800 { 00:08:57.800 "nbd_device": "/dev/nbd4", 00:08:57.800 "bdev_name": "Nvme2n3" 00:08:57.800 }, 00:08:57.800 { 00:08:57.800 "nbd_device": "/dev/nbd5", 00:08:57.800 "bdev_name": "Nvme3n1" 00:08:57.800 } 00:08:57.800 ]' 00:08:57.800 20:15:28 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:57.800 20:15:28 -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:57.800 { 00:08:57.800 "nbd_device": "/dev/nbd0", 00:08:57.800 "bdev_name": "Nvme0n1" 00:08:57.800 }, 00:08:57.800 { 00:08:57.800 "nbd_device": "/dev/nbd1", 00:08:57.800 "bdev_name": "Nvme1n1" 00:08:57.800 }, 00:08:57.800 { 00:08:57.800 "nbd_device": "/dev/nbd2", 00:08:57.800 "bdev_name": "Nvme2n1" 00:08:57.800 }, 00:08:57.800 { 00:08:57.800 "nbd_device": "/dev/nbd3", 00:08:57.800 "bdev_name": "Nvme2n2" 00:08:57.800 }, 00:08:57.800 { 00:08:57.800 "nbd_device": 
"/dev/nbd4", 00:08:57.800 "bdev_name": "Nvme2n3" 00:08:57.800 }, 00:08:57.800 { 00:08:57.800 "nbd_device": "/dev/nbd5", 00:08:57.800 "bdev_name": "Nvme3n1" 00:08:57.800 } 00:08:57.800 ]' 00:08:57.800 20:15:28 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@51 -- # local i 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@41 -- # break 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.060 20:15:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:58.319 20:15:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:58.319 20:15:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:58.319 20:15:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:58.319 20:15:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.319 20:15:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.319 20:15:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:58.319 20:15:28 -- bdev/nbd_common.sh@41 -- # break 00:08:58.319 20:15:28 -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.319 20:15:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.319 20:15:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:58.578 20:15:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:58.578 20:15:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:58.578 20:15:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:58.578 20:15:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.578 20:15:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.578 20:15:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:58.578 20:15:28 -- bdev/nbd_common.sh@41 -- # break 00:08:58.578 20:15:28 -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.578 20:15:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.578 20:15:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:58.842 20:15:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:58.842 20:15:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:58.842 20:15:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:58.842 
20:15:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.842 20:15:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.842 20:15:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:58.842 20:15:28 -- bdev/nbd_common.sh@41 -- # break 00:08:58.842 20:15:28 -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.842 20:15:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.842 20:15:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:59.101 20:15:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:59.101 20:15:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:59.101 20:15:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:59.101 20:15:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.101 20:15:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.101 20:15:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:59.101 20:15:29 -- bdev/nbd_common.sh@41 -- # break 00:08:59.101 20:15:29 -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.101 20:15:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:59.101 20:15:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:59.101 20:15:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@41 -- # break 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@45 -- # return 0 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:59.360 20:15:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@65 -- # true 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@65 -- # count=0 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@122 -- # count=0 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@127 -- # return 0 00:08:59.619 20:15:29 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@12 -- # local i 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:59.619 /dev/nbd0 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:59.619 20:15:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:59.619 20:15:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:08:59.619 20:15:29 -- common/autotest_common.sh@855 -- # local i 00:08:59.619 20:15:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:59.620 20:15:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:59.620 20:15:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:08:59.620 20:15:29 -- common/autotest_common.sh@859 -- # break 00:08:59.620 20:15:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:59.620 20:15:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:59.620 20:15:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.620 1+0 records in 00:08:59.620 1+0 records out 00:08:59.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541092 s, 7.6 MB/s 00:08:59.620 20:15:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.620 20:15:29 -- common/autotest_common.sh@872 -- # size=4096 00:08:59.620 20:15:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.620 20:15:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:59.620 20:15:29 -- common/autotest_common.sh@875 -- # return 0 00:08:59.620 20:15:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.620 20:15:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:59.620 20:15:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:59.879 /dev/nbd1 00:08:59.879 20:15:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:59.879 20:15:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:59.879 20:15:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:08:59.879 20:15:30 -- common/autotest_common.sh@855 -- # local i 00:08:59.879 20:15:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:59.879 20:15:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:59.879 20:15:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:08:59.879 20:15:30 -- common/autotest_common.sh@859 -- # break 
00:08:59.879 20:15:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:59.879 20:15:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:59.879 20:15:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:59.879 1+0 records in 00:08:59.879 1+0 records out 00:08:59.879 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051524 s, 7.9 MB/s 00:08:59.879 20:15:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.879 20:15:30 -- common/autotest_common.sh@872 -- # size=4096 00:08:59.879 20:15:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:59.879 20:15:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:59.879 20:15:30 -- common/autotest_common.sh@875 -- # return 0 00:08:59.879 20:15:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:59.879 20:15:30 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:59.879 20:15:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:00.138 /dev/nbd10 00:09:00.138 20:15:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:00.138 20:15:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:00.138 20:15:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:09:00.138 20:15:30 -- common/autotest_common.sh@855 -- # local i 00:09:00.138 20:15:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:00.138 20:15:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:00.138 20:15:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:09:00.138 20:15:30 -- common/autotest_common.sh@859 -- # break 00:09:00.138 20:15:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:00.138 20:15:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:00.138 20:15:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.138 1+0 records in 00:09:00.138 1+0 records out 00:09:00.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000645526 s, 6.3 MB/s 00:09:00.138 20:15:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.138 20:15:30 -- common/autotest_common.sh@872 -- # size=4096 00:09:00.138 20:15:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.138 20:15:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:00.139 20:15:30 -- common/autotest_common.sh@875 -- # return 0 00:09:00.139 20:15:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.139 20:15:30 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:00.139 20:15:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:00.397 /dev/nbd11 00:09:00.397 20:15:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:00.397 20:15:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:00.397 20:15:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:09:00.397 20:15:30 -- common/autotest_common.sh@855 -- # local i 00:09:00.397 20:15:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:00.397 20:15:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:00.397 20:15:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:09:00.397 20:15:30 -- 
common/autotest_common.sh@859 -- # break 00:09:00.397 20:15:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:00.397 20:15:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:00.397 20:15:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.397 1+0 records in 00:09:00.397 1+0 records out 00:09:00.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458984 s, 8.9 MB/s 00:09:00.397 20:15:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.397 20:15:30 -- common/autotest_common.sh@872 -- # size=4096 00:09:00.397 20:15:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.397 20:15:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:00.397 20:15:30 -- common/autotest_common.sh@875 -- # return 0 00:09:00.397 20:15:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.397 20:15:30 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:00.397 20:15:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:00.656 /dev/nbd12 00:09:00.656 20:15:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:00.657 20:15:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:00.657 20:15:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:09:00.657 20:15:30 -- common/autotest_common.sh@855 -- # local i 00:09:00.657 20:15:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:00.657 20:15:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:00.657 20:15:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:09:00.657 20:15:30 -- common/autotest_common.sh@859 -- # break 00:09:00.657 20:15:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:00.657 20:15:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:00.657 20:15:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.657 1+0 records in 00:09:00.657 1+0 records out 00:09:00.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000808184 s, 5.1 MB/s 00:09:00.657 20:15:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.657 20:15:30 -- common/autotest_common.sh@872 -- # size=4096 00:09:00.657 20:15:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.657 20:15:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:00.657 20:15:30 -- common/autotest_common.sh@875 -- # return 0 00:09:00.657 20:15:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.657 20:15:30 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:00.657 20:15:30 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:00.916 /dev/nbd13 00:09:00.916 20:15:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:00.916 20:15:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:00.916 20:15:30 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:09:00.916 20:15:30 -- common/autotest_common.sh@855 -- # local i 00:09:00.916 20:15:30 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:00.916 20:15:30 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:00.916 20:15:30 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 
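Once all six devices are up, the nbd_dd_data_verify pass that follows below writes one random file to every device and compares it back. Reduced to its essentials, with block sizes, counts, and flags as traced:

    nbd_list='/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256        # 1 MiB of random data
    for nbd in $nbd_list; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct   # write pass
    done
    for nbd in $nbd_list; do
        cmp -b -n 1M nbdrandtest "$nbd"                        # verify pass: byte-compare 1 MiB
    done
    rm nbdrandtest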
00:09:00.916 20:15:30 -- common/autotest_common.sh@859 -- # break 00:09:00.916 20:15:30 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:00.916 20:15:30 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:00.916 20:15:30 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:00.916 1+0 records in 00:09:00.916 1+0 records out 00:09:00.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676698 s, 6.1 MB/s 00:09:00.916 20:15:30 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.916 20:15:30 -- common/autotest_common.sh@872 -- # size=4096 00:09:00.916 20:15:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:00.916 20:15:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:00.916 20:15:30 -- common/autotest_common.sh@875 -- # return 0 00:09:00.916 20:15:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:00.916 20:15:30 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:00.916 20:15:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:00.916 20:15:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:00.916 20:15:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:00.916 20:15:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:00.916 { 00:09:00.916 "nbd_device": "/dev/nbd0", 00:09:00.916 "bdev_name": "Nvme0n1" 00:09:00.916 }, 00:09:00.916 { 00:09:00.916 "nbd_device": "/dev/nbd1", 00:09:00.916 "bdev_name": "Nvme1n1" 00:09:00.916 }, 00:09:00.916 { 00:09:00.916 "nbd_device": "/dev/nbd10", 00:09:00.916 "bdev_name": "Nvme2n1" 00:09:00.916 }, 00:09:00.916 { 00:09:00.916 "nbd_device": "/dev/nbd11", 00:09:00.916 "bdev_name": "Nvme2n2" 00:09:00.916 }, 00:09:00.916 { 00:09:00.916 "nbd_device": "/dev/nbd12", 00:09:00.916 "bdev_name": "Nvme2n3" 00:09:00.916 }, 00:09:00.916 { 00:09:00.916 "nbd_device": "/dev/nbd13", 00:09:00.916 "bdev_name": "Nvme3n1" 00:09:00.916 } 00:09:00.916 ]' 00:09:00.916 20:15:31 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:00.916 { 00:09:00.916 "nbd_device": "/dev/nbd0", 00:09:00.916 "bdev_name": "Nvme0n1" 00:09:00.916 }, 00:09:00.916 { 00:09:00.916 "nbd_device": "/dev/nbd1", 00:09:00.916 "bdev_name": "Nvme1n1" 00:09:00.916 }, 00:09:00.916 { 00:09:00.916 "nbd_device": "/dev/nbd10", 00:09:00.916 "bdev_name": "Nvme2n1" 00:09:00.916 }, 00:09:00.916 { 00:09:00.916 "nbd_device": "/dev/nbd11", 00:09:00.916 "bdev_name": "Nvme2n2" 00:09:00.916 }, 00:09:00.916 { 00:09:00.916 "nbd_device": "/dev/nbd12", 00:09:00.916 "bdev_name": "Nvme2n3" 00:09:00.916 }, 00:09:00.916 { 00:09:00.916 "nbd_device": "/dev/nbd13", 00:09:00.916 "bdev_name": "Nvme3n1" 00:09:00.916 } 00:09:00.916 ]' 00:09:00.916 20:15:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:01.175 /dev/nbd1 00:09:01.175 /dev/nbd10 00:09:01.175 /dev/nbd11 00:09:01.175 /dev/nbd12 00:09:01.175 /dev/nbd13' 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:01.175 /dev/nbd1 00:09:01.175 /dev/nbd10 00:09:01.175 /dev/nbd11 00:09:01.175 /dev/nbd12 00:09:01.175 /dev/nbd13' 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@65 -- # count=6 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@66 -- # echo 6 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@95 -- # 
count=6 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:01.175 256+0 records in 00:09:01.175 256+0 records out 00:09:01.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00966826 s, 108 MB/s 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:01.175 256+0 records in 00:09:01.175 256+0 records out 00:09:01.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136601 s, 7.7 MB/s 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.175 20:15:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:01.433 256+0 records in 00:09:01.433 256+0 records out 00:09:01.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116833 s, 9.0 MB/s 00:09:01.433 20:15:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.433 20:15:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:01.433 256+0 records in 00:09:01.433 256+0 records out 00:09:01.433 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115473 s, 9.1 MB/s 00:09:01.433 20:15:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.433 20:15:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:01.693 256+0 records in 00:09:01.693 256+0 records out 00:09:01.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114968 s, 9.1 MB/s 00:09:01.693 20:15:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.693 20:15:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:01.693 256+0 records in 00:09:01.693 256+0 records out 00:09:01.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114315 s, 9.2 MB/s 00:09:01.693 20:15:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:01.693 20:15:31 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:01.952 256+0 records in 00:09:01.952 256+0 records out 00:09:01.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128663 s, 8.1 MB/s 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:01.952 20:15:31 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@51 -- # local i 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:01.952 20:15:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:01.952 20:15:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@41 -- # break 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.211 20:15:32 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@41 -- # break 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.211 20:15:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:02.470 20:15:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:02.470 20:15:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:02.470 20:15:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:02.470 20:15:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.470 20:15:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.470 20:15:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:02.470 20:15:32 -- bdev/nbd_common.sh@41 -- # break 00:09:02.470 20:15:32 -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.470 20:15:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.470 20:15:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:02.732 20:15:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@41 -- # break 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@45 -- # return 0 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:02.733 20:15:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:03.000 20:15:32 -- bdev/nbd_common.sh@41 -- # break 00:09:03.000 20:15:32 -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.000 20:15:32 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:03.000 20:15:32 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:03.000 20:15:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:03.000 20:15:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:03.000 20:15:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:03.000 20:15:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:03.000 20:15:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:03.000 20:15:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:03.000 20:15:33 -- bdev/nbd_common.sh@41 -- # break 00:09:03.000 20:15:33 -- bdev/nbd_common.sh@45 -- # return 0 00:09:03.000 20:15:33 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:03.000 20:15:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.000 20:15:33 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:03.259 20:15:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:03.259 20:15:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:03.259 20:15:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:03.259 20:15:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:03.259 20:15:33 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:03.259 20:15:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:03.259 20:15:33 -- bdev/nbd_common.sh@65 -- # true 00:09:03.260 20:15:33 -- bdev/nbd_common.sh@65 -- # count=0 00:09:03.260 20:15:33 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:03.260 20:15:33 -- bdev/nbd_common.sh@104 -- # count=0 00:09:03.260 20:15:33 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:03.260 20:15:33 -- bdev/nbd_common.sh@109 -- # return 0 00:09:03.260 20:15:33 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:03.260 20:15:33 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:03.260 20:15:33 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:03.260 20:15:33 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:03.260 20:15:33 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:03.260 20:15:33 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:03.518 malloc_lvol_verify 00:09:03.518 20:15:33 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:03.777 19bafdbe-5483-4aa5-8939-483a686fae58 00:09:03.777 20:15:33 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:03.777 2cddd42c-22e3-4b52-a23f-1cd047262191 00:09:03.777 20:15:33 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:04.037 /dev/nbd0 00:09:04.037 20:15:34 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:04.037 mke2fs 1.46.5 (30-Dec-2021) 00:09:04.037 Discarding device blocks: 0/4096 done 00:09:04.037 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:04.037 00:09:04.037 Allocating group tables: 0/1 done 00:09:04.037 Writing inode tables: 0/1 done 00:09:04.037 Creating journal (1024 blocks): done 00:09:04.037 Writing superblocks and filesystem accounting information: 0/1 done 00:09:04.037 00:09:04.037 20:15:34 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:04.037 20:15:34 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:04.037 20:15:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.037 20:15:34 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:04.037 20:15:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:04.037 20:15:34 -- bdev/nbd_common.sh@51 -- # local i 00:09:04.037 20:15:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.037 20:15:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:04.297 20:15:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:04.297 20:15:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:04.297 20:15:34 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:09:04.297 20:15:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.297 20:15:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.297 20:15:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:04.297 20:15:34 -- bdev/nbd_common.sh@41 -- # break 00:09:04.297 20:15:34 -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.297 20:15:34 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:04.297 20:15:34 -- bdev/nbd_common.sh@147 -- # return 0 00:09:04.297 20:15:34 -- bdev/blockdev.sh@326 -- # killprocess 66608 00:09:04.297 20:15:34 -- common/autotest_common.sh@936 -- # '[' -z 66608 ']' 00:09:04.297 20:15:34 -- common/autotest_common.sh@940 -- # kill -0 66608 00:09:04.297 20:15:34 -- common/autotest_common.sh@941 -- # uname 00:09:04.297 20:15:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:04.297 20:15:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66608 00:09:04.297 killing process with pid 66608 00:09:04.297 20:15:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:04.297 20:15:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:04.297 20:15:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66608' 00:09:04.297 20:15:34 -- common/autotest_common.sh@955 -- # kill 66608 00:09:04.297 20:15:34 -- common/autotest_common.sh@960 -- # wait 66608 00:09:05.677 20:15:35 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:09:05.677 00:09:05.677 real 0m10.527s 00:09:05.677 user 0m13.519s 00:09:05.677 sys 0m4.080s 00:09:05.677 20:15:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:05.677 20:15:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.677 ************************************ 00:09:05.677 END TEST bdev_nbd 00:09:05.677 ************************************ 00:09:05.677 20:15:35 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:09:05.677 20:15:35 -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:09:05.677 skipping fio tests on NVMe due to multi-ns failures. 00:09:05.677 20:15:35 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:05.677 20:15:35 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:05.677 20:15:35 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:05.677 20:15:35 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:05.677 20:15:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.677 20:15:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.677 ************************************ 00:09:05.677 START TEST bdev_verify 00:09:05.677 ************************************ 00:09:05.677 20:15:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:05.677 [2024-04-24 20:15:35.861525] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:09:05.677 [2024-04-24 20:15:35.861656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67000 ]
00:09:05.935 [2024-04-24 20:15:36.033120] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:06.192 [2024-04-24 20:15:36.272841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:06.192 [2024-04-24 20:15:36.272903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:07.127 Running I/O for 5 seconds...
00:09:12.412
00:09:12.412 Latency(us)
00:09:12.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:12.412 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:12.412 Verification LBA range: start 0x0 length 0xbd0bd
00:09:12.412 Nvme0n1 : 5.05 1875.00 7.32 0.00 0.00 68068.97 13686.23 69905.07
00:09:12.412 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:12.412 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:09:12.412 Nvme0n1 : 5.05 1851.64 7.23 0.00 0.00 68825.36 14949.58 72852.87
00:09:12.412 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:12.412 Verification LBA range: start 0x0 length 0xa0000
00:09:12.412 Nvme1n1 : 5.05 1874.31 7.32 0.00 0.00 68000.44 13475.68 64851.69
00:09:12.412 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:12.412 Verification LBA range: start 0xa0000 length 0xa0000
00:09:12.412 Nvme1n1 : 5.05 1851.02 7.23 0.00 0.00 68765.47 17897.38 67799.49
00:09:12.412 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:12.412 Verification LBA range: start 0x0 length 0x80000
00:09:12.412 Nvme2n1 : 5.06 1873.78 7.32 0.00 0.00 67832.18 13265.12 60219.42
00:09:12.412 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:12.412 Verification LBA range: start 0x80000 length 0x80000
00:09:12.412 Nvme2n1 : 5.08 1864.39 7.28 0.00 0.00 68207.04 11054.27 62325.00
00:09:12.412 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:12.412 Verification LBA range: start 0x0 length 0x80000
00:09:12.412 Nvme2n2 : 5.06 1873.27 7.32 0.00 0.00 67726.70 13001.92 60640.54
00:09:12.412 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:12.412 Verification LBA range: start 0x80000 length 0x80000
00:09:12.412 Nvme2n2 : 5.08 1863.98 7.28 0.00 0.00 68096.69 11212.18 62746.11
00:09:12.412 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:12.412 Verification LBA range: start 0x0 length 0x80000
00:09:12.412 Nvme2n3 : 5.07 1882.34 7.35 0.00 0.00 67353.43 5132.34 63167.23
00:09:12.412 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:12.412 Verification LBA range: start 0x80000 length 0x80000
00:09:12.412 Nvme2n3 : 5.08 1863.55 7.28 0.00 0.00 68010.14 11054.27 64009.46
00:09:12.412 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:12.412 Verification LBA range: start 0x0 length 0x20000
00:09:12.412 Nvme3n1 : 5.07 1881.79 7.35 0.00 0.00 67264.99 4948.10 65693.92
00:09:12.412 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:12.412 Verification LBA range: start 0x20000 length 0x20000
00:09:12.412 Nvme3n1 : 5.08 1863.12 7.28 0.00 0.00 67933.05 10948.99 65272.80
===================================================================================================================
00:09:12.412 Total : 22418.19 87.57 0.00 0.00 68004.48 4948.10 72852.87
00:09:13.782
00:09:13.782 real 0m7.946s
00:09:13.782 user 0m14.457s
00:09:13.782 sys 0m0.308s
00:09:13.782 20:15:43 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:09:13.782 ************************************
00:09:13.783 END TEST bdev_verify
20:15:43 -- common/autotest_common.sh@10 -- # set +x
00:09:13.783 ************************************
00:09:13.783 20:15:43 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:13.783 20:15:43 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']'
00:09:13.783 20:15:43 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:13.783 20:15:43 -- common/autotest_common.sh@10 -- # set +x
00:09:13.783 ************************************
00:09:13.783 START TEST bdev_verify_big_io
00:09:13.783 ************************************
00:09:13.783 20:15:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:13.783 [2024-04-24 20:15:43.958219] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization...
00:09:13.783 [2024-04-24 20:15:43.958334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67109 ]
00:09:14.040 [2024-04-24 20:15:44.130338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:14.298 [2024-04-24 20:15:44.373935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:14.298 [2024-04-24 20:15:44.373961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:15.301 Running I/O for 5 seconds...
00:09:21.883
00:09:21.883 Latency(us)
00:09:21.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:21.883 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:21.883 Verification LBA range: start 0x0 length 0xbd0b
00:09:21.883 Nvme0n1 : 5.39 166.21 10.39 0.00 0.00 752289.38 31794.17 747899.99
00:09:21.883 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:21.883 Verification LBA range: start 0xbd0b length 0xbd0b
00:09:21.883 Nvme0n1 : 5.48 163.48 10.22 0.00 0.00 762575.03 21687.42 761375.67
00:09:21.883 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:21.883 Verification LBA range: start 0x0 length 0xa000
00:09:21.883 Nvme1n1 : 5.49 167.68 10.48 0.00 0.00 724445.58 91381.92 626618.91
00:09:21.883 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:21.883 Verification LBA range: start 0xa000 length 0xa000
00:09:21.883 Nvme1n1 : 5.54 165.83 10.36 0.00 0.00 732763.91 58534.97 636725.67
00:09:21.883 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:21.883 Verification LBA range: start 0x0 length 0x8000
00:09:21.883 Nvme2n1 : 5.55 171.55 10.72 0.00 0.00 695094.75 53902.70 771482.42
00:09:21.883 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:21.883 Verification LBA range: start 0x8000 length 0x8000
00:09:21.883 Nvme2n1 : 5.58 171.91 10.74 0.00 0.00 695145.81 39374.24 646832.42
00:09:21.883 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:21.883 Verification LBA range: start 0x0 length 0x8000
00:09:21.883 Nvme2n2 : 5.58 175.06 10.94 0.00 0.00 670233.82 28635.81 997199.99
00:09:21.883 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:21.883 Verification LBA range: start 0x8000 length 0x8000
00:09:21.883 Nvme2n2 : 5.59 169.34 10.58 0.00 0.00 687782.51 40427.03 690628.37
00:09:21.883 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:21.883 Verification LBA range: start 0x0 length 0x8000
00:09:21.883 Nvme2n3 : 5.66 181.94 11.37 0.00 0.00 627269.92 27372.47 801802.69
00:09:21.883 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:21.883 Verification LBA range: start 0x8000 length 0x8000
00:09:21.883 Nvme2n3 : 5.65 172.40 10.77 0.00 0.00 661658.89 38742.57 1394732.41
00:09:21.883 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:21.883 Verification LBA range: start 0x0 length 0x2000
00:09:21.883 Nvme3n1 : 5.68 197.45 12.34 0.00 0.00 569131.75 2631.97 805171.61
00:09:21.883 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:21.883 Verification LBA range: start 0x2000 length 0x2000
00:09:21.883 Nvme3n1 : 5.68 189.42 11.84 0.00 0.00 590598.49 2934.64 1421683.77
===================================================================================================================
00:09:21.883 Total : 2092.26 130.77 0.00 0.00 676767.77 2631.97 1421683.77
00:09:22.821
00:09:22.821 real 0m9.052s
00:09:22.821 user 0m16.610s
00:09:22.821 sys 0m0.340s
00:09:22.821 20:15:52 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:09:22.821 ************************************
00:09:22.821 END TEST bdev_verify_big_io
00:09:22.821 ************************************
00:09:22.821 20:15:52 -- common/autotest_common.sh@10 -- # set +x
00:09:22.821 20:15:52 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:22.821 20:15:52 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:09:22.821 20:15:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:22.821 20:15:52 -- common/autotest_common.sh@10 -- # set +x
00:09:23.081 ************************************
00:09:23.081 START TEST bdev_write_zeroes
00:09:23.081 ************************************
00:09:23.081 20:15:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:23.340 [2024-04-24 20:15:53.160754] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization...
00:09:23.340 [2024-04-24 20:15:53.160869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67231 ]
00:09:23.340 [2024-04-24 20:15:53.330783] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:23.340 [2024-04-24 20:15:53.560219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:24.277 Running I/O for 1 seconds...
00:09:25.211
00:09:25.211 Latency(us)
00:09:25.211 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:25.211 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:25.211 Nvme0n1 : 1.01 11129.59 43.47 0.00 0.00 11469.33 8211.74 27793.58
00:09:25.211 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:25.211 Nvme1n1 : 1.01 11117.83 43.43 0.00 0.00 11467.52 8580.22 28004.14
00:09:25.211 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:25.211 Nvme2n1 : 1.02 11150.87 43.56 0.00 0.00 11387.06 6422.00 24214.10
00:09:25.211 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:25.211 Nvme2n2 : 1.02 11139.15 43.51 0.00 0.00 11384.65 6737.84 23371.87
00:09:25.212 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:25.212 Nvme2n3 : 1.02 11128.35 43.47 0.00 0.00 11372.91 6737.84 22950.76
00:09:25.212 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:25.212 Nvme3n1 : 1.02 11181.29 43.68 0.00 0.00 11215.99 5053.38 18844.89
===================================================================================================================
00:09:25.212 Total : 66847.07 261.12 0.00 0.00 11382.43 5053.38 28004.14
00:09:26.592
00:09:26.592 real 0m3.553s
00:09:26.592 user 0m3.176s
00:09:26.592 sys 0m0.259s
00:09:26.592 20:15:56 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:09:26.592 20:15:56 -- common/autotest_common.sh@10 -- # set +x
00:09:26.592 ************************************
00:09:26.592 END TEST bdev_write_zeroes
00:09:26.592 ************************************
00:09:26.592 20:15:56 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:26.592 20:15:56 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:09:26.592 20:15:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:26.592 20:15:56 -- common/autotest_common.sh@10
-- # set +x 00:09:26.592 ************************************ 00:09:26.592 START TEST bdev_json_nonenclosed 00:09:26.592 ************************************ 00:09:26.592 20:15:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:26.850 [2024-04-24 20:15:56.874247] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:09:26.850 [2024-04-24 20:15:56.874375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67290 ] 00:09:26.850 [2024-04-24 20:15:57.045953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.109 [2024-04-24 20:15:57.292261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.109 [2024-04-24 20:15:57.292363] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:27.109 [2024-04-24 20:15:57.292386] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:27.109 [2024-04-24 20:15:57.292400] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:27.678 00:09:27.678 real 0m0.969s 00:09:27.678 user 0m0.722s 00:09:27.678 sys 0m0.140s 00:09:27.678 20:15:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:27.678 20:15:57 -- common/autotest_common.sh@10 -- # set +x 00:09:27.678 ************************************ 00:09:27.678 END TEST bdev_json_nonenclosed 00:09:27.678 ************************************ 00:09:27.678 20:15:57 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.678 20:15:57 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:27.678 20:15:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:27.678 20:15:57 -- common/autotest_common.sh@10 -- # set +x 00:09:27.937 ************************************ 00:09:27.937 START TEST bdev_json_nonarray 00:09:27.937 ************************************ 00:09:27.937 20:15:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:27.937 [2024-04-24 20:15:58.003561] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:09:27.937 [2024-04-24 20:15:58.003685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67325 ] 00:09:28.196 [2024-04-24 20:15:58.173838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.455 [2024-04-24 20:15:58.434954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.455 [2024-04-24 20:15:58.435076] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:28.455 [2024-04-24 20:15:58.435119] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:28.455 [2024-04-24 20:15:58.435133] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:28.713 00:09:28.713 real 0m0.981s 00:09:28.713 user 0m0.737s 00:09:28.713 sys 0m0.137s 00:09:28.713 20:15:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:28.713 ************************************ 00:09:28.713 END TEST bdev_json_nonarray 00:09:28.713 ************************************ 00:09:28.713 20:15:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.713 20:15:58 -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:09:28.713 20:15:58 -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:09:28.713 20:15:58 -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:09:28.713 20:15:58 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:28.713 20:15:58 -- bdev/blockdev.sh@811 -- # cleanup 00:09:28.713 20:15:58 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:28.971 20:15:58 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:28.971 20:15:58 -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:09:28.971 20:15:58 -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:09:28.971 20:15:58 -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:09:28.971 20:15:58 -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:09:28.971 00:09:28.971 real 0m44.440s 00:09:28.971 user 1m3.968s 00:09:28.971 sys 0m7.350s 00:09:28.971 20:15:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:28.971 20:15:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.971 ************************************ 00:09:28.971 END TEST blockdev_nvme 00:09:28.971 ************************************ 00:09:28.971 20:15:59 -- spdk/autotest.sh@209 -- # uname -s 00:09:28.971 20:15:59 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:09:28.971 20:15:59 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:28.971 20:15:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:28.971 20:15:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:28.971 20:15:59 -- common/autotest_common.sh@10 -- # set +x 00:09:28.971 ************************************ 00:09:28.971 START TEST blockdev_nvme_gpt 00:09:28.971 ************************************ 00:09:28.971 20:15:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:09:29.230 * Looking for test storage... 
00:09:29.230 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:29.230 20:15:59 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:29.230 20:15:59 -- bdev/nbd_common.sh@6 -- # set -e 00:09:29.230 20:15:59 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:29.230 20:15:59 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:29.230 20:15:59 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:29.230 20:15:59 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:29.230 20:15:59 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:09:29.230 20:15:59 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:29.230 20:15:59 -- bdev/blockdev.sh@20 -- # : 00:09:29.230 20:15:59 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:09:29.230 20:15:59 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:09:29.230 20:15:59 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:09:29.230 20:15:59 -- bdev/blockdev.sh@674 -- # uname -s 00:09:29.230 20:15:59 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:09:29.230 20:15:59 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:09:29.230 20:15:59 -- bdev/blockdev.sh@682 -- # test_type=gpt 00:09:29.230 20:15:59 -- bdev/blockdev.sh@683 -- # crypto_device= 00:09:29.230 20:15:59 -- bdev/blockdev.sh@684 -- # dek= 00:09:29.230 20:15:59 -- bdev/blockdev.sh@685 -- # env_ctx= 00:09:29.230 20:15:59 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:09:29.230 20:15:59 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:09:29.230 20:15:59 -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:09:29.230 20:15:59 -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:09:29.230 20:15:59 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:09:29.230 20:15:59 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=67412 00:09:29.230 20:15:59 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:29.230 20:15:59 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:29.230 20:15:59 -- bdev/blockdev.sh@49 -- # waitforlisten 67412 00:09:29.230 20:15:59 -- common/autotest_common.sh@817 -- # '[' -z 67412 ']' 00:09:29.230 20:15:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.230 20:15:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:29.230 20:15:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.230 20:15:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:29.230 20:15:59 -- common/autotest_common.sh@10 -- # set +x 00:09:29.230 [2024-04-24 20:15:59.357977] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:09:29.230 [2024-04-24 20:15:59.358098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67412 ] 00:09:29.489 [2024-04-24 20:15:59.528937] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.748 [2024-04-24 20:15:59.797073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.685 20:16:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:30.685 20:16:00 -- common/autotest_common.sh@850 -- # return 0 00:09:30.685 20:16:00 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:09:30.685 20:16:00 -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:09:30.685 20:16:00 -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:31.251 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:31.251 Waiting for block devices as requested 00:09:31.510 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.510 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.510 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:31.769 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:37.044 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:37.044 20:16:06 -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:09:37.044 20:16:06 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:09:37.044 20:16:06 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:09:37.044 20:16:06 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:09:37.044 20:16:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:37.044 20:16:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:09:37.044 20:16:06 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:09:37.044 20:16:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:37.044 20:16:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:09:37.044 20:16:06 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:09:37.044 20:16:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:37.044 20:16:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:09:37.044 20:16:06 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:09:37.044 20:16:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:37.044 20:16:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:09:37.044 20:16:06 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:09:37.044 20:16:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 
00:09:37.044 20:16:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:09:37.044 20:16:06 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:09:37.044 20:16:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:37.044 20:16:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:09:37.044 20:16:06 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:09:37.044 20:16:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:09:37.044 20:16:06 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:09:37.044 20:16:06 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:09:37.044 20:16:06 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:09:37.044 20:16:06 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:09:37.044 20:16:06 -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:09:37.044 20:16:06 -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:09:37.044 20:16:06 -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:09:37.044 20:16:06 -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:09:37.044 20:16:06 -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:09:37.044 20:16:06 -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:09:37.044 20:16:06 -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:09:37.044 20:16:06 -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:09:37.044 BYT; 00:09:37.045 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:09:37.045 20:16:06 -- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:09:37.045 BYT; 00:09:37.045 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:09:37.045 20:16:06 -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:09:37.045 20:16:06 -- bdev/blockdev.sh@116 -- # break 00:09:37.045 20:16:06 -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:09:37.045 20:16:06 -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:09:37.045 20:16:06 -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:37.045 20:16:06 -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:09:37.045 20:16:06 -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:09:37.045 20:16:06 -- scripts/common.sh@408 -- # local spdk_guid 00:09:37.045 20:16:06 -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:37.045 20:16:06 -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:37.045 20:16:06 -- scripts/common.sh@413 -- # 
IFS='()' 00:09:37.045 20:16:06 -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:09:37.045 20:16:06 -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:37.045 20:16:06 -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:09:37.045 20:16:06 -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:37.045 20:16:06 -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:37.045 20:16:06 -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:09:37.045 20:16:07 -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:09:37.045 20:16:07 -- scripts/common.sh@420 -- # local spdk_guid 00:09:37.045 20:16:07 -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:09:37.045 20:16:07 -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:37.045 20:16:07 -- scripts/common.sh@425 -- # IFS='()' 00:09:37.045 20:16:07 -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:09:37.045 20:16:07 -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:09:37.045 20:16:07 -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:09:37.045 20:16:07 -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:37.045 20:16:07 -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:37.045 20:16:07 -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:09:37.045 20:16:07 -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:09:37.981 The operation has completed successfully. 00:09:37.981 20:16:08 -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:09:38.917 The operation has completed successfully. 
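
For readers reconstructing the GPT step: everything from blockdev.sh@113 through @133 above boils down to scraping two partition-type GUIDs out of module/bdev/gpt/gpt.h and stamping them onto a freshly labelled disk. The sketch below reproduces that flow outside the harness. It is an illustration, not the test script itself: the get_guid helper is a hypothetical stand-in for scripts/common.sh, it assumes the SPDK_GPT_GUID(...) macro and its argument list share one line of gpt.h (as in the tree exercised here), and the -u unique GUIDs are the fixed values visible in the trace. Point it only at a scratch device, since both tools rewrite the partition table.

#!/usr/bin/env bash
# Standalone sketch of the GPT labelling performed by blockdev.sh@128-133 above.
set -euo pipefail

dev=${1:?usage: $0 /dev/nvmeXnY}          # scratch block device, contents destroyed
gpt_h=${GPT_H:-module/bdev/gpt/gpt.h}     # assumes an SPDK checkout as cwd

# Pull a GUID out of a gpt.h macro such as
#   SPDK_GPT_GUID(0x6527994e, 0x2c5a, 0x4eec, 0x9613, 0x8f5944074e8b)
# by splitting the matching line on parentheses, then normalizing
# "0x..., 0x..." into the dashed form sgdisk expects.
get_guid() {
    local _ body
    IFS='()' read -r _ body _ < <(grep -w "$1" "$gpt_h")
    body=${body//0x/}
    body=${body//, /-}
    echo "$body"
}

new_guid=$(get_guid SPDK_GPT_PART_TYPE_GUID)      # 6527994e-2c5a-4eec-9613-8f5944074e8b
old_guid=$(get_guid SPDK_GPT_PART_TYPE_GUID_OLD)  # 7c5222bd-8f5d-4087-9c00-bf9843c7b58c

# Label the disk and carve two partitions, as parted was driven above.
parted -s "$dev" mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%

# Restamp each partition's type GUID (plus a fixed unique GUID) so the
# SPDK gpt vbdev module will claim it, mirroring the two sgdisk calls.
sgdisk -t "1:$new_guid" -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 "$dev"
sgdisk -t "2:$old_guid" -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df "$dev"

Once the type GUIDs match, spdk_tgt's gpt module claims the partitions during bdev examine, which is why the bdev dump further down lists Nvme0n1p1 and Nvme0n1p2 with partition_type_guid values equal to the two constants above.
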
00:09:38.917 20:16:09 -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:39.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:40.422 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.422 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.422 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.422 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:40.422 20:16:10 -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:09:40.422 20:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.422 20:16:10 -- common/autotest_common.sh@10 -- # set +x 00:09:40.422 [] 00:09:40.422 20:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.422 20:16:10 -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:09:40.422 20:16:10 -- bdev/blockdev.sh@81 -- # local json 00:09:40.422 20:16:10 -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:40.422 20:16:10 -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:40.423 20:16:10 -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:40.423 20:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.423 20:16:10 -- common/autotest_common.sh@10 -- # set +x 00:09:40.991 20:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.991 20:16:10 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:09:40.991 20:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.991 20:16:10 -- common/autotest_common.sh@10 -- # set +x 00:09:40.991 20:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.991 20:16:10 -- bdev/blockdev.sh@740 -- # cat 00:09:40.991 20:16:10 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:09:40.991 20:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.991 20:16:10 -- common/autotest_common.sh@10 -- # set +x 00:09:40.991 20:16:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.991 20:16:10 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:09:40.991 20:16:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.991 20:16:10 -- common/autotest_common.sh@10 -- # set +x 00:09:40.991 20:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.991 20:16:11 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:40.991 20:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.991 20:16:11 -- common/autotest_common.sh@10 -- # set +x 00:09:40.991 20:16:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.991 20:16:11 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:09:40.991 20:16:11 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:09:40.991 20:16:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:40.991 20:16:11 -- common/autotest_common.sh@10 -- # set +x 00:09:40.991 20:16:11 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:09:40.991 20:16:11 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:40.991 20:16:11 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:09:40.991 20:16:11 -- bdev/blockdev.sh@749 -- # jq -r .name 00:09:40.992 20:16:11 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "510aa9ed-e398-48d5-8352-e414ed927257"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "510aa9ed-e398-48d5-8352-e414ed927257",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "850f75da-fa82-42ab-9b06-8ba76fddcfe1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "850f75da-fa82-42ab-9b06-8ba76fddcfe1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0f9815bc-1793-4170-b54d-01f33159ce84"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0f9815bc-1793-4170-b54d-01f33159ce84",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "13efe17d-c075-4ebe-a4d0-33f994bbec82"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "13efe17d-c075-4ebe-a4d0-33f994bbec82",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8c4a6d69-4923-4a2c-a117-a4d8e8ae6880"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8c4a6d69-4923-4a2c-a117-a4d8e8ae6880",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:40.992 20:16:11 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:09:40.992 20:16:11 -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:09:40.992 20:16:11 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:09:40.992 20:16:11 -- bdev/blockdev.sh@754 -- # killprocess 67412 00:09:40.992 20:16:11 -- common/autotest_common.sh@936 -- # '[' -z 67412 ']' 00:09:40.992 20:16:11 -- common/autotest_common.sh@940 -- # kill -0 67412 00:09:40.992 20:16:11 -- common/autotest_common.sh@941 -- # uname 00:09:40.992 20:16:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:40.992 20:16:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67412 00:09:40.992 killing process with pid 67412 00:09:40.992 20:16:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:40.992 20:16:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:40.992 20:16:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67412' 00:09:40.992 20:16:11 -- common/autotest_common.sh@955 -- # kill 67412 00:09:40.992 20:16:11 -- common/autotest_common.sh@960 -- # wait 67412 00:09:43.527 20:16:13 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:43.527 20:16:13 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:43.527 20:16:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:43.527 20:16:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:43.527 20:16:13 -- common/autotest_common.sh@10 -- # set +x 00:09:43.527 ************************************ 00:09:43.527 START TEST bdev_hello_world 00:09:43.527 ************************************ 00:09:43.527 20:16:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:09:43.787 [2024-04-24 20:16:13.850683] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:09:43.787 [2024-04-24 20:16:13.850825] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68062 ] 00:09:44.046 [2024-04-24 20:16:14.023608] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.046 [2024-04-24 20:16:14.276003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.983 [2024-04-24 20:16:14.981524] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:44.983 [2024-04-24 20:16:14.981596] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:09:44.983 [2024-04-24 20:16:14.981629] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:44.983 [2024-04-24 20:16:14.984765] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:44.983 [2024-04-24 20:16:14.985388] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:44.983 [2024-04-24 20:16:14.985422] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:44.983 [2024-04-24 20:16:14.985714] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:09:44.983 00:09:44.983 [2024-04-24 20:16:14.985752] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:46.360 00:09:46.360 real 0m2.542s 00:09:46.360 user 0m2.167s 00:09:46.360 sys 0m0.260s 00:09:46.360 20:16:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:46.360 20:16:16 -- common/autotest_common.sh@10 -- # set +x 00:09:46.360 ************************************ 00:09:46.360 END TEST bdev_hello_world 00:09:46.360 ************************************ 00:09:46.360 20:16:16 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:09:46.361 20:16:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:46.361 20:16:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.361 20:16:16 -- common/autotest_common.sh@10 -- # set +x 00:09:46.361 ************************************ 00:09:46.361 START TEST bdev_bounds 00:09:46.361 ************************************ 00:09:46.361 20:16:16 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:09:46.361 Process bdevio pid: 68109 00:09:46.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.361 20:16:16 -- bdev/blockdev.sh@290 -- # bdevio_pid=68109 00:09:46.361 20:16:16 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:46.361 20:16:16 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:46.361 20:16:16 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68109' 00:09:46.361 20:16:16 -- bdev/blockdev.sh@293 -- # waitforlisten 68109 00:09:46.361 20:16:16 -- common/autotest_common.sh@817 -- # '[' -z 68109 ']' 00:09:46.361 20:16:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.361 20:16:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:46.361 20:16:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:46.361 20:16:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:46.361 20:16:16 -- common/autotest_common.sh@10 -- # set +x 00:09:46.361 [2024-04-24 20:16:16.544046] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:09:46.361 [2024-04-24 20:16:16.544257] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68109 ] 00:09:46.652 [2024-04-24 20:16:16.740487] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:46.909 [2024-04-24 20:16:17.007612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.909 [2024-04-24 20:16:17.007777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.909 [2024-04-24 20:16:17.007804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.845 20:16:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:47.845 20:16:17 -- common/autotest_common.sh@850 -- # return 0 00:09:47.845 20:16:17 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:47.845 I/O targets: 00:09:47.845 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:47.845 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:47.845 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:47.845 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:47.845 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:47.845 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:47.845 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:47.845 00:09:47.845 00:09:47.845 CUnit - A unit testing framework for C - Version 2.1-3 00:09:47.845 http://cunit.sourceforge.net/ 00:09:47.845 00:09:47.845 00:09:47.845 Suite: bdevio tests on: Nvme3n1 00:09:47.845 Test: blockdev write read block ...passed 00:09:47.845 Test: blockdev write zeroes read block ...passed 00:09:47.845 Test: blockdev write zeroes read no split ...passed 00:09:47.845 Test: blockdev write zeroes read split ...passed 00:09:47.845 Test: blockdev write zeroes read split partial ...passed 00:09:47.845 Test: blockdev reset ...[2024-04-24 20:16:18.021002] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:09:47.845 passed 00:09:47.845 Test: blockdev write read 8 blocks ...[2024-04-24 20:16:18.025166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:47.845 passed 00:09:47.845 Test: blockdev write read size > 128k ...passed 00:09:47.845 Test: blockdev write read invalid size ...passed 00:09:47.845 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:47.845 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:47.845 Test: blockdev write read max offset ...passed 00:09:47.845 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:47.845 Test: blockdev writev readv 8 blocks ...passed 00:09:47.845 Test: blockdev writev readv 30 x 1block ...passed 00:09:47.845 Test: blockdev writev readv block ...passed 00:09:47.845 Test: blockdev writev readv size > 128k ...passed 00:09:47.845 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:47.845 Test: blockdev comparev and writev ...[2024-04-24 20:16:18.035318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x28660a000 len:0x1000 00:09:47.845 [2024-04-24 20:16:18.035420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:47.845 passed 00:09:47.845 Test: blockdev nvme passthru rw ...passed 00:09:47.845 Test: blockdev nvme passthru vendor specific ...passed 00:09:47.845 Test: blockdev nvme admin passthru ...[2024-04-24 20:16:18.036445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:47.845 [2024-04-24 20:16:18.036500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:47.845 passed 00:09:47.845 Test: blockdev copy ...passed 00:09:47.845 Suite: bdevio tests on: Nvme2n3 00:09:47.845 Test: blockdev write read block ...passed 00:09:47.845 Test: blockdev write zeroes read block ...passed 00:09:47.845 Test: blockdev write zeroes read no split ...passed 00:09:48.105 Test: blockdev write zeroes read split ...passed 00:09:48.105 Test: blockdev write zeroes read split partial ...passed 00:09:48.105 Test: blockdev reset ...[2024-04-24 20:16:18.128323] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:48.105 [2024-04-24 20:16:18.132801] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
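Every suite opens with a blockdev reset that disconnects and reconnects the controller backing the bdev (0000:00:13.0 for Nvme3n1 above, 0000:00:12.0 for the Nvme2 namespaces that follow). bdevio issues these resets internally; a hedged out-of-band equivalent over the same socket would be the bdev_nvme_reset_controller RPC, assuming the controller was attached under the name Nvme2:

  # "Nvme2" is an assumed attach name; bdev_nvme_get_controllers lists the real ones.
  sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      bdev_nvme_reset_controller Nvme2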
00:09:48.105 passed 00:09:48.105 Test: blockdev write read 8 blocks ...passed 00:09:48.105 Test: blockdev write read size > 128k ...passed 00:09:48.105 Test: blockdev write read invalid size ...passed 00:09:48.105 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:48.105 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:48.105 Test: blockdev write read max offset ...passed 00:09:48.105 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:48.105 Test: blockdev writev readv 8 blocks ...passed 00:09:48.105 Test: blockdev writev readv 30 x 1block ...passed 00:09:48.105 Test: blockdev writev readv block ...passed 00:09:48.105 Test: blockdev writev readv size > 128k ...passed 00:09:48.105 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:48.105 Test: blockdev comparev and writev ...[2024-04-24 20:16:18.143908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x265504000 len:0x1000 00:09:48.105 [2024-04-24 20:16:18.143996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:48.105 passed 00:09:48.105 Test: blockdev nvme passthru rw ...passed 00:09:48.105 Test: blockdev nvme passthru vendor specific ...passed 00:09:48.105 Test: blockdev nvme admin passthru ...[2024-04-24 20:16:18.145140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:48.105 [2024-04-24 20:16:18.145195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:48.105 passed 00:09:48.105 Test: blockdev copy ...passed 00:09:48.105 Suite: bdevio tests on: Nvme2n2 00:09:48.105 Test: blockdev write read block ...passed 00:09:48.105 Test: blockdev write zeroes read block ...passed 00:09:48.105 Test: blockdev write zeroes read no split ...passed 00:09:48.105 Test: blockdev write zeroes read split ...passed 00:09:48.105 Test: blockdev write zeroes read split partial ...passed 00:09:48.105 Test: blockdev reset ...[2024-04-24 20:16:18.234918] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:48.105 [2024-04-24 20:16:18.239474] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:48.105 passed 00:09:48.105 Test: blockdev write read 8 blocks ...passed 00:09:48.105 Test: blockdev write read size > 128k ...passed 00:09:48.105 Test: blockdev write read invalid size ...passed 00:09:48.105 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:48.105 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:48.105 Test: blockdev write read max offset ...passed 00:09:48.105 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:48.105 Test: blockdev writev readv 8 blocks ...passed 00:09:48.105 Test: blockdev writev readv 30 x 1block ...passed 00:09:48.105 Test: blockdev writev readv block ...passed 00:09:48.105 Test: blockdev writev readv size > 128k ...passed 00:09:48.105 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:48.105 Test: blockdev comparev and writev ...[2024-04-24 20:16:18.249276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x265504000 len:0x1000 00:09:48.105 [2024-04-24 20:16:18.249359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:48.105 passed 00:09:48.105 Test: blockdev nvme passthru rw ...passed 00:09:48.105 Test: blockdev nvme passthru vendor specific ...passed 00:09:48.105 Test: blockdev nvme admin passthru ...[2024-04-24 20:16:18.250503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:48.105 [2024-04-24 20:16:18.250553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:48.105 passed 00:09:48.105 Test: blockdev copy ...passed 00:09:48.105 Suite: bdevio tests on: Nvme2n1 00:09:48.105 Test: blockdev write read block ...passed 00:09:48.105 Test: blockdev write zeroes read block ...passed 00:09:48.105 Test: blockdev write zeroes read no split ...passed 00:09:48.105 Test: blockdev write zeroes read split ...passed 00:09:48.365 Test: blockdev write zeroes read split partial ...passed 00:09:48.365 Test: blockdev reset ...[2024-04-24 20:16:18.340760] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:09:48.365 [2024-04-24 20:16:18.345310] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:48.365 passed 00:09:48.365 Test: blockdev write read 8 blocks ...passed 00:09:48.365 Test: blockdev write read size > 128k ...passed 00:09:48.365 Test: blockdev write read invalid size ...passed 00:09:48.365 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:48.365 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:48.365 Test: blockdev write read max offset ...passed 00:09:48.365 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:48.365 Test: blockdev writev readv 8 blocks ...passed 00:09:48.365 Test: blockdev writev readv 30 x 1block ...passed 00:09:48.365 Test: blockdev writev readv block ...passed 00:09:48.365 Test: blockdev writev readv size > 128k ...passed 00:09:48.365 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:48.365 Test: blockdev comparev and writev ...[2024-04-24 20:16:18.356908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29503c000 len:0x1000 00:09:48.365 [2024-04-24 20:16:18.357252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:48.365 passed 00:09:48.365 Test: blockdev nvme passthru rw ...passed 00:09:48.365 Test: blockdev nvme passthru vendor specific ...[2024-04-24 20:16:18.358730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:48.365 [2024-04-24 20:16:18.359001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:09:48.365 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:09:48.365 passed 00:09:48.365 Test: blockdev copy ...passed 00:09:48.365 Suite: bdevio tests on: Nvme1n1 00:09:48.365 Test: blockdev write read block ...passed 00:09:48.365 Test: blockdev write zeroes read block ...passed 00:09:48.365 Test: blockdev write zeroes read no split ...passed 00:09:48.365 Test: blockdev write zeroes read split ...passed 00:09:48.365 Test: blockdev write zeroes read split partial ...passed 00:09:48.365 Test: blockdev reset ...[2024-04-24 20:16:18.448034] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:09:48.365 [2024-04-24 20:16:18.452212] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
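The reset notices identify controllers by PCI address (0000:00:10.0 through 0000:00:13.0) rather than by bdev name. The mapping between the two can be read from the controller list on the same socket; the returned JSON includes each controller's attach name and its PCIe traddr:

  # Dump attached NVMe controllers with their transport addresses.
  sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      bdev_nvme_get_controllers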
00:09:48.365 passed 00:09:48.365 Test: blockdev write read 8 blocks ...passed 00:09:48.365 Test: blockdev write read size > 128k ...passed 00:09:48.365 Test: blockdev write read invalid size ...passed 00:09:48.365 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:48.365 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:48.365 Test: blockdev write read max offset ...passed 00:09:48.365 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:48.365 Test: blockdev writev readv 8 blocks ...passed 00:09:48.365 Test: blockdev writev readv 30 x 1block ...passed 00:09:48.365 Test: blockdev writev readv block ...passed 00:09:48.365 Test: blockdev writev readv size > 128k ...passed 00:09:48.365 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:48.365 Test: blockdev comparev and writev ...[2024-04-24 20:16:18.464565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x295038000 len:0x1000 00:09:48.365 [2024-04-24 20:16:18.464919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:48.365 passed 00:09:48.365 Test: blockdev nvme passthru rw ...passed 00:09:48.365 Test: blockdev nvme passthru vendor specific ...[2024-04-24 20:16:18.466670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:48.365 [2024-04-24 20:16:18.466883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:09:48.365 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:09:48.365 passed 00:09:48.365 Test: blockdev copy ...passed 00:09:48.365 Suite: bdevio tests on: Nvme0n1p2 00:09:48.365 Test: blockdev write read block ...passed 00:09:48.365 Test: blockdev write zeroes read block ...passed 00:09:48.365 Test: blockdev write zeroes read no split ...passed 00:09:48.365 Test: blockdev write zeroes read split ...passed 00:09:48.365 Test: blockdev write zeroes read split partial ...passed 00:09:48.365 Test: blockdev reset ...[2024-04-24 20:16:18.560716] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:48.365 [2024-04-24 20:16:18.564893] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:48.365 passed 00:09:48.365 Test: blockdev write read 8 blocks ...passed 00:09:48.365 Test: blockdev write read size > 128k ...passed 00:09:48.365 Test: blockdev write read invalid size ...passed 00:09:48.365 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:48.365 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:48.365 Test: blockdev write read max offset ...passed 00:09:48.365 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:48.365 Test: blockdev writev readv 8 blocks ...passed 00:09:48.365 Test: blockdev writev readv 30 x 1block ...passed 00:09:48.365 Test: blockdev writev readv block ...passed 00:09:48.365 Test: blockdev writev readv size > 128k ...passed 00:09:48.365 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:48.365 Test: blockdev comparev and writev ...passed[2024-04-24 20:16:18.576338] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:48.365 separate metadata which is not supported yet. 
00:09:48.365 00:09:48.365 Test: blockdev nvme passthru rw ...passed 00:09:48.365 Test: blockdev nvme passthru vendor specific ...passed 00:09:48.365 Test: blockdev nvme admin passthru ...passed 00:09:48.365 Test: blockdev copy ...passed 00:09:48.365 Suite: bdevio tests on: Nvme0n1p1 00:09:48.365 Test: blockdev write read block ...passed 00:09:48.365 Test: blockdev write zeroes read block ...passed 00:09:48.365 Test: blockdev write zeroes read no split ...passed 00:09:48.626 Test: blockdev write zeroes read split ...passed 00:09:48.626 Test: blockdev write zeroes read split partial ...passed 00:09:48.626 Test: blockdev reset ...[2024-04-24 20:16:18.661064] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:48.626 [2024-04-24 20:16:18.665427] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:48.626 passed 00:09:48.626 Test: blockdev write read 8 blocks ...passed 00:09:48.626 Test: blockdev write read size > 128k ...passed 00:09:48.626 Test: blockdev write read invalid size ...passed 00:09:48.626 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:48.626 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:48.626 Test: blockdev write read max offset ...passed 00:09:48.626 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:48.626 Test: blockdev writev readv 8 blocks ...passed 00:09:48.626 Test: blockdev writev readv 30 x 1block ...passed 00:09:48.626 Test: blockdev writev readv block ...passed 00:09:48.626 Test: blockdev writev readv size > 128k ...passed 00:09:48.626 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:48.626 Test: blockdev comparev and writev ...[2024-04-24 20:16:18.675776] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:48.626 separate metadata which is not supported yet. 
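comparev_and_writev is skipped on Nvme0n1p1 and Nvme0n1p2 because those partition bdevs expose separate (non-interleaved) metadata, which the fused compare-and-write path does not support yet. A bdev's metadata layout can be inspected with bdev_get_bdevs; the md_size/md_interleave field names below are taken from that RPC's JSON output and are worth re-checking against the running version:

  # md_size > 0 with md_interleave == false is the "separate metadata" case above.
  sudo /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      bdev_get_bdevs -b Nvme0n1p1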
00:09:48.626 passed 00:09:48.626 Test: blockdev nvme passthru rw ...passed 00:09:48.626 Test: blockdev nvme passthru vendor specific ...passed 00:09:48.626 Test: blockdev nvme admin passthru ...passed 00:09:48.626 Test: blockdev copy ...passed 00:09:48.626 00:09:48.626 Run Summary: Type Total Ran Passed Failed Inactive 00:09:48.626 suites 7 7 n/a 0 0 00:09:48.626 tests 161 161 161 0 0 00:09:48.626 asserts 1006 1006 1006 0 n/a 00:09:48.626 00:09:48.626 Elapsed time = 2.060 seconds 00:09:48.626 0 00:09:48.626 20:16:18 -- bdev/blockdev.sh@295 -- # killprocess 68109 00:09:48.626 20:16:18 -- common/autotest_common.sh@936 -- # '[' -z 68109 ']' 00:09:48.626 20:16:18 -- common/autotest_common.sh@940 -- # kill -0 68109 00:09:48.626 20:16:18 -- common/autotest_common.sh@941 -- # uname 00:09:48.626 20:16:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:48.626 20:16:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68109 00:09:48.626 20:16:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:48.626 20:16:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:48.626 20:16:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68109' 00:09:48.626 killing process with pid 68109 00:09:48.626 20:16:18 -- common/autotest_common.sh@955 -- # kill 68109 00:09:48.626 20:16:18 -- common/autotest_common.sh@960 -- # wait 68109 00:09:50.002 20:16:19 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:09:50.002 00:09:50.002 real 0m3.444s 00:09:50.002 user 0m8.427s 00:09:50.002 sys 0m0.489s 00:09:50.002 20:16:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:50.002 20:16:19 -- common/autotest_common.sh@10 -- # set +x 00:09:50.002 ************************************ 00:09:50.002 END TEST bdev_bounds 00:09:50.002 ************************************ 00:09:50.002 20:16:19 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:50.002 20:16:19 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:09:50.002 20:16:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:50.002 20:16:19 -- common/autotest_common.sh@10 -- # set +x 00:09:50.002 ************************************ 00:09:50.002 START TEST bdev_nbd 00:09:50.002 ************************************ 00:09:50.002 20:16:20 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:50.002 20:16:20 -- bdev/blockdev.sh@300 -- # uname -s 00:09:50.002 20:16:20 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:09:50.002 20:16:20 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.002 20:16:20 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:50.002 20:16:20 -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:50.002 20:16:20 -- bdev/blockdev.sh@304 -- # local bdev_all 00:09:50.002 20:16:20 -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:09:50.002 20:16:20 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:09:50.002 20:16:20 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 
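bdev_nbd begins here: nbd_function_test first confirms the kernel nbd module is present (the /sys/module/nbd check above) and then declares the seven bdevs plus the nbd nodes they will be mapped to. A sketch of those preconditions; the modprobe line is an assumption, as the trace only shows the existence check:

  sudo modprobe nbd   # assumption: the CI image loads this before the test runs
  [[ -e /sys/module/nbd ]] || { echo "nbd module missing" >&2; exit 1; }

  bdev_list=(Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)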
00:09:50.002 20:16:20 -- bdev/blockdev.sh@311 -- # local nbd_all 00:09:50.002 20:16:20 -- bdev/blockdev.sh@312 -- # bdev_num=7 00:09:50.002 20:16:20 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:50.003 20:16:20 -- bdev/blockdev.sh@314 -- # local nbd_list 00:09:50.003 20:16:20 -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:50.003 20:16:20 -- bdev/blockdev.sh@315 -- # local bdev_list 00:09:50.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:50.003 20:16:20 -- bdev/blockdev.sh@318 -- # nbd_pid=68184 00:09:50.003 20:16:20 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:50.003 20:16:20 -- bdev/blockdev.sh@320 -- # waitforlisten 68184 /var/tmp/spdk-nbd.sock 00:09:50.003 20:16:20 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:50.003 20:16:20 -- common/autotest_common.sh@817 -- # '[' -z 68184 ']' 00:09:50.003 20:16:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:50.003 20:16:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:50.003 20:16:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:50.003 20:16:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:50.003 20:16:20 -- common/autotest_common.sh@10 -- # set +x 00:09:50.003 [2024-04-24 20:16:20.122238] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
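With bdev_svc listening on its own socket (/var/tmp/spdk-nbd.sock), the chunks below attach each bdev to an nbd node with nbd_start_disk, poll /proc/partitions until the kernel publishes the device, and read one 4 KiB block through it with direct I/O. A condensed sketch of that start-and-verify loop, reusing the bdev_list/nbd_list arrays from the sketch above:

  SPDK=/home/vagrant/spdk_repo/spdk
  for i in "${!bdev_list[@]}"; do
      sudo "$SPDK/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock \
          nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
      nbd=$(basename "${nbd_list[$i]}")
      # The trace retries this check up to 20 times before giving up.
      until grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done
      # One direct-I/O read proves the bdev-to-nbd mapping end to end.
      sudo dd if="${nbd_list[$i]}" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  done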
00:09:50.003 [2024-04-24 20:16:20.122777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.261 [2024-04-24 20:16:20.296489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.519 [2024-04-24 20:16:20.549344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.087 20:16:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:51.087 20:16:21 -- common/autotest_common.sh@850 -- # return 0 00:09:51.087 20:16:21 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:51.087 20:16:21 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.087 20:16:21 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:51.087 20:16:21 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:51.087 20:16:21 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:51.087 20:16:21 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.087 20:16:21 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:51.087 20:16:21 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:51.087 20:16:21 -- bdev/nbd_common.sh@24 -- # local i 00:09:51.087 20:16:21 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:51.087 20:16:21 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:51.087 20:16:21 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:51.087 20:16:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:51.399 20:16:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:51.399 20:16:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:51.399 20:16:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:51.399 20:16:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:09:51.399 20:16:21 -- common/autotest_common.sh@855 -- # local i 00:09:51.399 20:16:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:51.399 20:16:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:51.399 20:16:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:09:51.399 20:16:21 -- common/autotest_common.sh@859 -- # break 00:09:51.399 20:16:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:51.399 20:16:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:51.399 20:16:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:51.399 1+0 records in 00:09:51.399 1+0 records out 00:09:51.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567367 s, 7.2 MB/s 00:09:51.399 20:16:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.399 20:16:21 -- common/autotest_common.sh@872 -- # size=4096 00:09:51.399 20:16:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.399 20:16:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:51.399 20:16:21 -- common/autotest_common.sh@875 -- # return 0 00:09:51.399 20:16:21 -- bdev/nbd_common.sh@27 -- 
# (( i++ )) 00:09:51.399 20:16:21 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:51.399 20:16:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:51.658 20:16:21 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:51.658 20:16:21 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:51.658 20:16:21 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:51.658 20:16:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:09:51.658 20:16:21 -- common/autotest_common.sh@855 -- # local i 00:09:51.658 20:16:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:51.658 20:16:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:51.658 20:16:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:09:51.658 20:16:21 -- common/autotest_common.sh@859 -- # break 00:09:51.658 20:16:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:51.658 20:16:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:51.658 20:16:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:51.658 1+0 records in 00:09:51.658 1+0 records out 00:09:51.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00148429 s, 2.8 MB/s 00:09:51.658 20:16:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.658 20:16:21 -- common/autotest_common.sh@872 -- # size=4096 00:09:51.658 20:16:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.658 20:16:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:51.658 20:16:21 -- common/autotest_common.sh@875 -- # return 0 00:09:51.658 20:16:21 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:51.658 20:16:21 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:51.658 20:16:21 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:51.917 20:16:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:51.917 20:16:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:51.917 20:16:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:51.917 20:16:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:09:51.917 20:16:22 -- common/autotest_common.sh@855 -- # local i 00:09:51.917 20:16:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:51.917 20:16:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:51.917 20:16:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:09:51.917 20:16:22 -- common/autotest_common.sh@859 -- # break 00:09:51.917 20:16:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:51.917 20:16:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:51.917 20:16:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:51.917 1+0 records in 00:09:51.917 1+0 records out 00:09:51.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706213 s, 5.8 MB/s 00:09:51.917 20:16:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.917 20:16:22 -- common/autotest_common.sh@872 -- # size=4096 00:09:51.917 20:16:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.917 20:16:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:51.917 20:16:22 -- 
common/autotest_common.sh@875 -- # return 0 00:09:51.917 20:16:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:51.917 20:16:22 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:51.917 20:16:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:52.175 20:16:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:52.175 20:16:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:52.175 20:16:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:52.175 20:16:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:09:52.175 20:16:22 -- common/autotest_common.sh@855 -- # local i 00:09:52.175 20:16:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:52.175 20:16:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:52.175 20:16:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:09:52.175 20:16:22 -- common/autotest_common.sh@859 -- # break 00:09:52.175 20:16:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:52.175 20:16:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:52.175 20:16:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.175 1+0 records in 00:09:52.175 1+0 records out 00:09:52.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000697972 s, 5.9 MB/s 00:09:52.175 20:16:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.175 20:16:22 -- common/autotest_common.sh@872 -- # size=4096 00:09:52.175 20:16:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.175 20:16:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:52.175 20:16:22 -- common/autotest_common.sh@875 -- # return 0 00:09:52.175 20:16:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:52.175 20:16:22 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:52.175 20:16:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:52.434 20:16:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:52.434 20:16:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:52.434 20:16:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:52.434 20:16:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:09:52.435 20:16:22 -- common/autotest_common.sh@855 -- # local i 00:09:52.435 20:16:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:52.435 20:16:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:52.435 20:16:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:09:52.435 20:16:22 -- common/autotest_common.sh@859 -- # break 00:09:52.435 20:16:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:52.435 20:16:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:52.435 20:16:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.435 1+0 records in 00:09:52.435 1+0 records out 00:09:52.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078135 s, 5.2 MB/s 00:09:52.435 20:16:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.435 20:16:22 -- common/autotest_common.sh@872 -- # size=4096 00:09:52.435 20:16:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.435 20:16:22 -- 
common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:52.435 20:16:22 -- common/autotest_common.sh@875 -- # return 0 00:09:52.435 20:16:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:52.435 20:16:22 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:52.435 20:16:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:52.694 20:16:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:52.694 20:16:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:52.694 20:16:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:52.694 20:16:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:09:52.694 20:16:22 -- common/autotest_common.sh@855 -- # local i 00:09:52.694 20:16:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:52.694 20:16:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:52.694 20:16:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:09:52.694 20:16:22 -- common/autotest_common.sh@859 -- # break 00:09:52.694 20:16:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:52.694 20:16:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:52.694 20:16:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.694 1+0 records in 00:09:52.694 1+0 records out 00:09:52.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000785156 s, 5.2 MB/s 00:09:52.694 20:16:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.694 20:16:22 -- common/autotest_common.sh@872 -- # size=4096 00:09:52.694 20:16:22 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.694 20:16:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:52.694 20:16:22 -- common/autotest_common.sh@875 -- # return 0 00:09:52.694 20:16:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:52.694 20:16:22 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:52.694 20:16:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:52.954 20:16:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:52.954 20:16:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:52.954 20:16:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:52.954 20:16:23 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:09:52.954 20:16:23 -- common/autotest_common.sh@855 -- # local i 00:09:52.954 20:16:23 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:52.954 20:16:23 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:52.954 20:16:23 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:09:52.954 20:16:23 -- common/autotest_common.sh@859 -- # break 00:09:52.954 20:16:23 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:52.954 20:16:23 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:52.954 20:16:23 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.954 1+0 records in 00:09:52.954 1+0 records out 00:09:52.954 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000909839 s, 4.5 MB/s 00:09:52.954 20:16:23 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.954 20:16:23 -- common/autotest_common.sh@872 -- # size=4096 00:09:52.954 20:16:23 -- common/autotest_common.sh@873 
-- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.954 20:16:23 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:52.954 20:16:23 -- common/autotest_common.sh@875 -- # return 0 00:09:52.954 20:16:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:52.954 20:16:23 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:52.954 20:16:23 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:53.213 20:16:23 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd0", 00:09:53.213 "bdev_name": "Nvme0n1p1" 00:09:53.213 }, 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd1", 00:09:53.213 "bdev_name": "Nvme0n1p2" 00:09:53.213 }, 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd2", 00:09:53.213 "bdev_name": "Nvme1n1" 00:09:53.213 }, 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd3", 00:09:53.213 "bdev_name": "Nvme2n1" 00:09:53.213 }, 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd4", 00:09:53.213 "bdev_name": "Nvme2n2" 00:09:53.213 }, 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd5", 00:09:53.213 "bdev_name": "Nvme2n3" 00:09:53.213 }, 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd6", 00:09:53.213 "bdev_name": "Nvme3n1" 00:09:53.213 } 00:09:53.213 ]' 00:09:53.213 20:16:23 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:53.213 20:16:23 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd0", 00:09:53.213 "bdev_name": "Nvme0n1p1" 00:09:53.213 }, 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd1", 00:09:53.213 "bdev_name": "Nvme0n1p2" 00:09:53.213 }, 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd2", 00:09:53.213 "bdev_name": "Nvme1n1" 00:09:53.213 }, 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd3", 00:09:53.213 "bdev_name": "Nvme2n1" 00:09:53.213 }, 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd4", 00:09:53.213 "bdev_name": "Nvme2n2" 00:09:53.213 }, 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd5", 00:09:53.213 "bdev_name": "Nvme2n3" 00:09:53.213 }, 00:09:53.213 { 00:09:53.213 "nbd_device": "/dev/nbd6", 00:09:53.213 "bdev_name": "Nvme3n1" 00:09:53.213 } 00:09:53.213 ]' 00:09:53.213 20:16:23 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:53.213 20:16:23 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:53.213 20:16:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.213 20:16:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:53.213 20:16:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:53.213 20:16:23 -- bdev/nbd_common.sh@51 -- # local i 00:09:53.213 20:16:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.213 20:16:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:53.473 20:16:23 -- 
bdev/nbd_common.sh@41 -- # break 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@41 -- # break 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.473 20:16:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:53.732 20:16:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:53.732 20:16:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:53.732 20:16:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:53.732 20:16:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.732 20:16:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.732 20:16:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:53.732 20:16:23 -- bdev/nbd_common.sh@41 -- # break 00:09:53.732 20:16:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.732 20:16:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.732 20:16:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:53.991 20:16:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:53.991 20:16:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:53.991 20:16:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:53.991 20:16:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.991 20:16:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.991 20:16:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:53.991 20:16:24 -- bdev/nbd_common.sh@41 -- # break 00:09:53.991 20:16:24 -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.991 20:16:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.991 20:16:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@41 -- # break 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit 
nbd5 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.250 20:16:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@41 -- # break 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@41 -- # break 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.510 20:16:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@65 -- # true 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@65 -- # count=0 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@122 -- # count=0 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@127 -- # return 0 00:09:54.770 20:16:24 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@12 -- # local i 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:54.770 20:16:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:55.029 /dev/nbd0 00:09:55.029 20:16:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:55.029 20:16:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:55.029 20:16:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:09:55.029 20:16:25 -- common/autotest_common.sh@855 -- # local i 00:09:55.029 20:16:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:55.029 20:16:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:55.029 20:16:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:09:55.029 20:16:25 -- common/autotest_common.sh@859 -- # break 00:09:55.029 20:16:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:55.029 20:16:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:55.029 20:16:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.029 1+0 records in 00:09:55.029 1+0 records out 00:09:55.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000731414 s, 5.6 MB/s 00:09:55.029 20:16:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.029 20:16:25 -- common/autotest_common.sh@872 -- # size=4096 00:09:55.029 20:16:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.029 20:16:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:55.029 20:16:25 -- common/autotest_common.sh@875 -- # return 0 00:09:55.029 20:16:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.029 20:16:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:55.029 20:16:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:55.287 /dev/nbd1 00:09:55.287 20:16:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:55.287 20:16:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:55.287 20:16:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:09:55.287 20:16:25 -- common/autotest_common.sh@855 -- # local i 00:09:55.287 20:16:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:55.287 20:16:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:55.287 20:16:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:09:55.287 20:16:25 -- common/autotest_common.sh@859 -- # break 00:09:55.287 20:16:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:55.288 20:16:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:55.288 20:16:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.288 1+0 records in 00:09:55.288 1+0 records out 00:09:55.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594098 s, 6.9 MB/s 00:09:55.288 20:16:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.288 20:16:25 -- 
common/autotest_common.sh@872 -- # size=4096 00:09:55.288 20:16:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.288 20:16:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:55.288 20:16:25 -- common/autotest_common.sh@875 -- # return 0 00:09:55.288 20:16:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.288 20:16:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:55.288 20:16:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:55.560 /dev/nbd10 00:09:55.560 20:16:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:55.560 20:16:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:55.560 20:16:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:09:55.560 20:16:25 -- common/autotest_common.sh@855 -- # local i 00:09:55.560 20:16:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:55.560 20:16:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:55.560 20:16:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:09:55.560 20:16:25 -- common/autotest_common.sh@859 -- # break 00:09:55.560 20:16:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:55.560 20:16:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:55.560 20:16:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.560 1+0 records in 00:09:55.560 1+0 records out 00:09:55.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597539 s, 6.9 MB/s 00:09:55.560 20:16:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.560 20:16:25 -- common/autotest_common.sh@872 -- # size=4096 00:09:55.560 20:16:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.560 20:16:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:55.560 20:16:25 -- common/autotest_common.sh@875 -- # return 0 00:09:55.560 20:16:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.560 20:16:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:55.560 20:16:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:55.825 /dev/nbd11 00:09:55.825 20:16:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:55.825 20:16:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:55.825 20:16:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:09:55.825 20:16:25 -- common/autotest_common.sh@855 -- # local i 00:09:55.825 20:16:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:55.825 20:16:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:55.825 20:16:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:09:55.825 20:16:25 -- common/autotest_common.sh@859 -- # break 00:09:55.825 20:16:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:55.825 20:16:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:55.825 20:16:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:55.825 1+0 records in 00:09:55.825 1+0 records out 00:09:55.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000822432 s, 5.0 MB/s 00:09:55.825 20:16:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:09:55.825 20:16:25 -- common/autotest_common.sh@872 -- # size=4096 00:09:55.825 20:16:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:55.825 20:16:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:55.825 20:16:25 -- common/autotest_common.sh@875 -- # return 0 00:09:55.825 20:16:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:55.825 20:16:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:55.825 20:16:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:56.082 /dev/nbd12 00:09:56.083 20:16:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:56.083 20:16:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:56.083 20:16:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:09:56.083 20:16:26 -- common/autotest_common.sh@855 -- # local i 00:09:56.083 20:16:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:56.083 20:16:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:56.083 20:16:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:09:56.083 20:16:26 -- common/autotest_common.sh@859 -- # break 00:09:56.083 20:16:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:56.083 20:16:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:56.083 20:16:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:56.083 1+0 records in 00:09:56.083 1+0 records out 00:09:56.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000774803 s, 5.3 MB/s 00:09:56.083 20:16:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.083 20:16:26 -- common/autotest_common.sh@872 -- # size=4096 00:09:56.083 20:16:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.083 20:16:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:56.083 20:16:26 -- common/autotest_common.sh@875 -- # return 0 00:09:56.083 20:16:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:56.083 20:16:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:56.083 20:16:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:56.083 /dev/nbd13 00:09:56.341 20:16:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:56.341 20:16:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:56.341 20:16:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:09:56.341 20:16:26 -- common/autotest_common.sh@855 -- # local i 00:09:56.341 20:16:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:56.341 20:16:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:56.341 20:16:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:09:56.341 20:16:26 -- common/autotest_common.sh@859 -- # break 00:09:56.341 20:16:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:56.341 20:16:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:56.341 20:16:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:56.341 1+0 records in 00:09:56.341 1+0 records out 00:09:56.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627528 s, 6.5 MB/s 00:09:56.341 20:16:26 -- common/autotest_common.sh@872 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.341 20:16:26 -- common/autotest_common.sh@872 -- # size=4096 00:09:56.341 20:16:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.341 20:16:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:56.341 20:16:26 -- common/autotest_common.sh@875 -- # return 0 00:09:56.341 20:16:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:56.341 20:16:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:56.341 20:16:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:56.341 /dev/nbd14 00:09:56.341 20:16:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:56.341 20:16:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:56.341 20:16:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:09:56.342 20:16:26 -- common/autotest_common.sh@855 -- # local i 00:09:56.342 20:16:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:09:56.342 20:16:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:09:56.342 20:16:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:09:56.342 20:16:26 -- common/autotest_common.sh@859 -- # break 00:09:56.342 20:16:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:09:56.342 20:16:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:09:56.342 20:16:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:56.601 1+0 records in 00:09:56.601 1+0 records out 00:09:56.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000997774 s, 4.1 MB/s 00:09:56.601 20:16:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.601 20:16:26 -- common/autotest_common.sh@872 -- # size=4096 00:09:56.601 20:16:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:56.601 20:16:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:09:56.601 20:16:26 -- common/autotest_common.sh@875 -- # return 0 00:09:56.601 20:16:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:56.601 20:16:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:56.601 20:16:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:56.601 20:16:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:56.601 20:16:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:56.601 20:16:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd0", 00:09:56.601 "bdev_name": "Nvme0n1p1" 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd1", 00:09:56.601 "bdev_name": "Nvme0n1p2" 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd10", 00:09:56.601 "bdev_name": "Nvme1n1" 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd11", 00:09:56.601 "bdev_name": "Nvme2n1" 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd12", 00:09:56.601 "bdev_name": "Nvme2n2" 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd13", 00:09:56.601 "bdev_name": "Nvme2n3" 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd14", 00:09:56.601 "bdev_name": "Nvme3n1" 00:09:56.601 } 00:09:56.601 ]' 00:09:56.601 20:16:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:56.601 20:16:26 -- 
bdev/nbd_common.sh@64 -- # echo '[ 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd0", 00:09:56.601 "bdev_name": "Nvme0n1p1" 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd1", 00:09:56.601 "bdev_name": "Nvme0n1p2" 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd10", 00:09:56.601 "bdev_name": "Nvme1n1" 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd11", 00:09:56.601 "bdev_name": "Nvme2n1" 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd12", 00:09:56.601 "bdev_name": "Nvme2n2" 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd13", 00:09:56.601 "bdev_name": "Nvme2n3" 00:09:56.601 }, 00:09:56.601 { 00:09:56.601 "nbd_device": "/dev/nbd14", 00:09:56.601 "bdev_name": "Nvme3n1" 00:09:56.601 } 00:09:56.601 ]' 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:56.861 /dev/nbd1 00:09:56.861 /dev/nbd10 00:09:56.861 /dev/nbd11 00:09:56.861 /dev/nbd12 00:09:56.861 /dev/nbd13 00:09:56.861 /dev/nbd14' 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:56.861 /dev/nbd1 00:09:56.861 /dev/nbd10 00:09:56.861 /dev/nbd11 00:09:56.861 /dev/nbd12 00:09:56.861 /dev/nbd13 00:09:56.861 /dev/nbd14' 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@65 -- # count=7 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@66 -- # echo 7 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@95 -- # count=7 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:56.861 256+0 records in 00:09:56.861 256+0 records out 00:09:56.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145476 s, 72.1 MB/s 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:56.861 20:16:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:56.861 256+0 records in 00:09:56.861 256+0 records out 00:09:56.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133878 s, 7.8 MB/s 00:09:56.861 20:16:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:56.861 20:16:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:57.120 256+0 records in 00:09:57.120 256+0 records out 00:09:57.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135664 s, 7.7 MB/s 00:09:57.120 20:16:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:57.120 20:16:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:57.120 256+0 records in 00:09:57.120 256+0 records out 00:09:57.120 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.136386 s, 7.7 MB/s 00:09:57.120 20:16:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:57.120 20:16:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:57.379 256+0 records in 00:09:57.379 256+0 records out 00:09:57.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13348 s, 7.9 MB/s 00:09:57.379 20:16:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:57.379 20:16:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:57.379 256+0 records in 00:09:57.379 256+0 records out 00:09:57.379 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13517 s, 7.8 MB/s 00:09:57.379 20:16:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:57.379 20:16:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:57.637 256+0 records in 00:09:57.637 256+0 records out 00:09:57.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139065 s, 7.5 MB/s 00:09:57.637 20:16:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:57.637 20:16:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:57.637 256+0 records in 00:09:57.637 256+0 records out 00:09:57.637 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139757 s, 7.5 MB/s 00:09:57.637 20:16:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:57.638 20:16:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:57.638 20:16:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:57.638 20:16:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:57.638 20:16:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:57.638 20:16:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:57.638 20:16:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:57.638 20:16:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.638 20:16:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:57.896 20:16:27 -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@51 -- # local i 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:57.896 20:16:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:58.155 20:16:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:58.155 20:16:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:58.155 20:16:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:58.155 20:16:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:58.155 20:16:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:58.155 20:16:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:58.155 20:16:28 -- bdev/nbd_common.sh@41 -- # break 00:09:58.155 20:16:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:58.155 20:16:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.155 20:16:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:58.155 20:16:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@41 -- # break 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@41 -- # break 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.415 20:16:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:58.673 20:16:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:58.673 20:16:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:58.673 20:16:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:58.673 20:16:28 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:58.673 20:16:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:58.673 20:16:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:58.673 20:16:28 -- bdev/nbd_common.sh@41 -- # break 00:09:58.673 20:16:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:58.673 20:16:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.674 20:16:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:58.932 20:16:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:58.932 20:16:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:58.932 20:16:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:58.932 20:16:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:58.932 20:16:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:58.932 20:16:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:58.932 20:16:29 -- bdev/nbd_common.sh@41 -- # break 00:09:58.932 20:16:29 -- bdev/nbd_common.sh@45 -- # return 0 00:09:58.932 20:16:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:58.932 20:16:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:59.191 20:16:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:59.192 20:16:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:59.192 20:16:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:59.192 20:16:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.192 20:16:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.192 20:16:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:59.192 20:16:29 -- bdev/nbd_common.sh@41 -- # break 00:09:59.192 20:16:29 -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.192 20:16:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:59.192 20:16:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:59.451 20:16:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:59.451 20:16:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:59.451 20:16:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:59.451 20:16:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:59.451 20:16:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:59.451 20:16:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:59.451 20:16:29 -- bdev/nbd_common.sh@41 -- # break 00:09:59.451 20:16:29 -- bdev/nbd_common.sh@45 -- # return 0 00:09:59.451 20:16:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:59.451 20:16:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.451 20:16:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@65 -- # true 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@65 -- # count=0 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@66 -- # echo 0 
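The traces above exercise two polling helpers from the suite's nbd_common.sh and autotest_common.sh: waitfornbd blocks until a freshly attached /dev/nbdN shows up in /proc/partitions and survives a direct 4 KiB read-back, while waitfornbd_exit blocks until a stopped device disappears again. A minimal sketch of that pattern, reconstructed from the xtrace lines above (the sleep interval is an assumption; it is not visible in the trace):

waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # polling interval assumed, not shown in the trace
    done
    # read-back check seen above: one direct 4096-byte read must succeed
    # and leave a non-empty file behind
    dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct
    local size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    [ "$size" != 0 ]
}

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1
    done
}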
00:09:59.711 20:16:29 -- bdev/nbd_common.sh@104 -- # count=0 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@109 -- # return 0 00:09:59.711 20:16:29 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:59.711 20:16:29 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:59.970 malloc_lvol_verify 00:09:59.970 20:16:30 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:00.229 cc9ac8d6-ff32-47bf-ab8e-d6ae71b9f02d 00:10:00.229 20:16:30 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:00.229 cadf80f8-3c1d-4712-b736-74514dd20a1f 00:10:00.229 20:16:30 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:00.488 /dev/nbd0 00:10:00.488 20:16:30 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:10:00.488 mke2fs 1.46.5 (30-Dec-2021) 00:10:00.488 Discarding device blocks: 0/4096 done 00:10:00.488 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:00.488 00:10:00.488 Allocating group tables: 0/1 done 00:10:00.488 Writing inode tables: 0/1 done 00:10:00.488 Creating journal (1024 blocks): done 00:10:00.488 Writing superblocks and filesystem accounting information: 0/1 done 00:10:00.488 00:10:00.488 20:16:30 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:10:00.488 20:16:30 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:00.488 20:16:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:00.488 20:16:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:00.488 20:16:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:00.488 20:16:30 -- bdev/nbd_common.sh@51 -- # local i 00:10:00.488 20:16:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:00.488 20:16:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:00.747 20:16:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:00.747 20:16:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:00.747 20:16:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:00.747 20:16:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:00.747 20:16:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:00.747 20:16:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:00.747 20:16:30 -- bdev/nbd_common.sh@41 -- # break 00:10:00.747 20:16:30 -- bdev/nbd_common.sh@45 -- # return 0 00:10:00.747 20:16:30 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:10:00.747 20:16:30 -- bdev/nbd_common.sh@147 -- # return 0 00:10:00.747 20:16:30 -- bdev/blockdev.sh@326 -- # killprocess 68184 00:10:00.747 20:16:30 -- common/autotest_common.sh@936 -- # '[' -z 68184 ']' 
00:10:00.747 20:16:30 -- common/autotest_common.sh@940 -- # kill -0 68184 00:10:00.747 20:16:30 -- common/autotest_common.sh@941 -- # uname 00:10:00.747 20:16:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:00.747 20:16:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68184 00:10:00.747 20:16:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:00.747 20:16:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:00.747 killing process with pid 68184 00:10:00.747 20:16:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68184' 00:10:00.747 20:16:30 -- common/autotest_common.sh@955 -- # kill 68184 00:10:00.747 20:16:30 -- common/autotest_common.sh@960 -- # wait 68184 00:10:02.125 20:16:32 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:10:02.125 00:10:02.125 real 0m12.186s 00:10:02.125 user 0m15.716s 00:10:02.125 sys 0m4.885s 00:10:02.125 20:16:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:02.125 20:16:32 -- common/autotest_common.sh@10 -- # set +x 00:10:02.125 ************************************ 00:10:02.125 END TEST bdev_nbd 00:10:02.125 ************************************ 00:10:02.125 20:16:32 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:10:02.125 20:16:32 -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:10:02.125 20:16:32 -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:10:02.125 skipping fio tests on NVMe due to multi-ns failures. 00:10:02.125 20:16:32 -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:10:02.125 20:16:32 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:02.125 20:16:32 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:02.125 20:16:32 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:10:02.125 20:16:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:02.125 20:16:32 -- common/autotest_common.sh@10 -- # set +x 00:10:02.384 ************************************ 00:10:02.384 START TEST bdev_verify 00:10:02.384 ************************************ 00:10:02.384 20:16:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:02.384 [2024-04-24 20:16:32.462564] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:10:02.384 [2024-04-24 20:16:32.462682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68614 ] 00:10:02.643 [2024-04-24 20:16:32.636207] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:02.901 [2024-04-24 20:16:32.882298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.901 [2024-04-24 20:16:32.882343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.467 Running I/O for 5 seconds... 
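For reference, the bdevperf invocation driving the verify run here, with its flags glossed; the meanings are corroborated by the per-job headers in the table that follows (depth: 128, IO size: 4096, workload: verify, a 5-second run with jobs on core masks 0x1 and 0x2):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \   # bdev config to load
    -q 128 \      # queue depth per job
    -o 4096 \     # I/O size in bytes
    -w verify \   # write, then read back and compare
    -t 5 \        # run time in seconds
    -C -m 0x3     # -m 0x3 runs reactors on cores 0 and 1 (-C not glossed here)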
00:10:08.743 00:10:08.743 Latency(us) 00:10:08.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:08.743 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x0 length 0x5e800 00:10:08.743 Nvme0n1p1 : 5.07 1389.01 5.43 0.00 0.00 91690.82 18950.17 89276.35 00:10:08.743 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x5e800 length 0x5e800 00:10:08.743 Nvme0n1p1 : 5.10 1431.18 5.59 0.00 0.00 88505.69 14739.02 80854.05 00:10:08.743 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x0 length 0x5e7ff 00:10:08.743 Nvme0n1p2 : 5.07 1388.59 5.42 0.00 0.00 91589.36 20318.79 87591.89 00:10:08.743 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:10:08.743 Nvme0n1p2 : 5.10 1430.74 5.59 0.00 0.00 88380.47 13686.23 80432.94 00:10:08.743 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x0 length 0xa0000 00:10:08.743 Nvme1n1 : 5.08 1397.05 5.46 0.00 0.00 91022.48 7211.59 85907.43 00:10:08.743 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0xa0000 length 0xa0000 00:10:08.743 Nvme1n1 : 5.04 1422.62 5.56 0.00 0.00 89668.91 20108.23 78748.48 00:10:08.743 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x0 length 0x80000 00:10:08.743 Nvme2n1 : 5.09 1396.51 5.46 0.00 0.00 90846.96 7632.71 85065.20 00:10:08.743 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x80000 length 0x80000 00:10:08.743 Nvme2n1 : 5.07 1425.62 5.57 0.00 0.00 89244.82 10791.07 77064.02 00:10:08.743 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x0 length 0x80000 00:10:08.743 Nvme2n2 : 5.09 1396.08 5.45 0.00 0.00 90718.30 7632.71 84222.97 00:10:08.743 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x80000 length 0x80000 00:10:08.743 Nvme2n2 : 5.07 1425.16 5.57 0.00 0.00 89046.23 10264.67 77064.02 00:10:08.743 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x0 length 0x80000 00:10:08.743 Nvme2n3 : 5.09 1395.66 5.45 0.00 0.00 90591.35 7158.95 86749.66 00:10:08.743 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x80000 length 0x80000 00:10:08.743 Nvme2n3 : 5.09 1432.05 5.59 0.00 0.00 88754.13 17370.99 79590.71 00:10:08.743 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x0 length 0x20000 00:10:08.743 Nvme3n1 : 5.09 1395.25 5.45 0.00 0.00 90477.00 7158.95 86749.66 00:10:08.743 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:08.743 Verification LBA range: start 0x20000 length 0x20000 00:10:08.743 Nvme3n1 : 5.10 1431.63 5.59 0.00 0.00 88633.19 17686.82 80011.82 00:10:08.743 =================================================================================================================== 00:10:08.743 Total : 19757.18 77.18 0.00 0.00 89925.83 7158.95 89276.35 00:10:10.648 
00:10:10.648 real 0m7.998s 00:10:10.649 user 0m14.503s 00:10:10.649 sys 0m0.314s 00:10:10.649 20:16:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:10.649 20:16:40 -- common/autotest_common.sh@10 -- # set +x 00:10:10.649 ************************************ 00:10:10.649 END TEST bdev_verify 00:10:10.649 ************************************ 00:10:10.649 20:16:40 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:10.649 20:16:40 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:10:10.649 20:16:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:10.649 20:16:40 -- common/autotest_common.sh@10 -- # set +x 00:10:10.649 ************************************ 00:10:10.649 START TEST bdev_verify_big_io 00:10:10.649 ************************************ 00:10:10.649 20:16:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:10.649 [2024-04-24 20:16:40.621786] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:10:10.649 [2024-04-24 20:16:40.621927] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68723 ] 00:10:10.649 [2024-04-24 20:16:40.794576] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:10.908 [2024-04-24 20:16:41.035802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.908 [2024-04-24 20:16:41.035833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.845 Running I/O for 5 seconds... 
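The START TEST / END TEST banners bracketing each block come from the run_test wrapper in the suite's autotest_common.sh; a simplified sketch of its shape (timing accounting and the xtrace toggling visible in the trace are omitted):

run_test() {
    local test_name=$1; shift
    echo '************************************'
    echo "START TEST $test_name"
    echo '************************************'
    "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $test_name"
    echo '************************************'
    return $rc
}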
00:10:18.415 00:10:18.415 Latency(us) 00:10:18.415 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:18.415 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x0 length 0x5e80 00:10:18.415 Nvme0n1p1 : 5.66 130.50 8.16 0.00 0.00 930162.98 18107.94 970248.64 00:10:18.415 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x5e80 length 0x5e80 00:10:18.415 Nvme0n1p1 : 5.65 136.56 8.54 0.00 0.00 891714.12 30530.83 936559.45 00:10:18.415 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x0 length 0x5e7f 00:10:18.415 Nvme0n1p2 : 5.66 131.68 8.23 0.00 0.00 910712.32 84644.09 1387994.58 00:10:18.415 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x5e7f length 0x5e7f 00:10:18.415 Nvme0n1p2 : 5.71 128.82 8.05 0.00 0.00 934262.48 84644.09 1246499.98 00:10:18.415 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x0 length 0xa000 00:10:18.415 Nvme1n1 : 5.77 137.51 8.59 0.00 0.00 856072.94 50954.90 1246499.98 00:10:18.415 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0xa000 length 0xa000 00:10:18.415 Nvme1n1 : 5.72 135.92 8.50 0.00 0.00 875171.52 56850.51 1502537.82 00:10:18.415 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x0 length 0x8000 00:10:18.415 Nvme2n1 : 5.81 141.03 8.81 0.00 0.00 816195.67 51376.01 1428421.60 00:10:18.415 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x8000 length 0x8000 00:10:18.415 Nvme2n1 : 5.78 144.30 9.02 0.00 0.00 797154.27 57271.62 845598.64 00:10:18.415 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x0 length 0x8000 00:10:18.415 Nvme2n2 : 5.78 141.23 8.83 0.00 0.00 797611.93 61482.77 1300402.69 00:10:18.415 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x8000 length 0x8000 00:10:18.415 Nvme2n2 : 5.82 149.40 9.34 0.00 0.00 757011.63 61061.65 842229.72 00:10:18.415 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x0 length 0x8000 00:10:18.415 Nvme2n3 : 5.84 150.07 9.38 0.00 0.00 733596.75 27161.91 1475586.47 00:10:18.415 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x8000 length 0x8000 00:10:18.415 Nvme2n3 : 5.85 158.13 9.88 0.00 0.00 703597.00 30530.83 875918.91 00:10:18.415 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x0 length 0x2000 00:10:18.415 Nvme3n1 : 5.89 170.86 10.68 0.00 0.00 631600.19 967.25 1489062.14 00:10:18.415 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:18.415 Verification LBA range: start 0x2000 length 0x2000 00:10:18.415 Nvme3n1 : 5.88 169.76 10.61 0.00 0.00 640746.20 8264.38 862443.23 00:10:18.415 =================================================================================================================== 00:10:18.415 Total : 2025.78 126.61 0.00 0.00 795457.12 967.25 
1502537.82 00:10:19.790 00:10:19.790 real 0m9.421s 00:10:19.790 user 0m17.304s 00:10:19.790 sys 0m0.337s 00:10:19.790 20:16:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:19.790 20:16:49 -- common/autotest_common.sh@10 -- # set +x 00:10:19.790 ************************************ 00:10:19.790 END TEST bdev_verify_big_io 00:10:19.790 ************************************ 00:10:19.790 20:16:49 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:19.790 20:16:50 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:19.790 20:16:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:19.790 20:16:50 -- common/autotest_common.sh@10 -- # set +x 00:10:20.049 ************************************ 00:10:20.049 START TEST bdev_write_zeroes 00:10:20.049 ************************************ 00:10:20.049 20:16:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:20.049 [2024-04-24 20:16:50.192444] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:10:20.049 [2024-04-24 20:16:50.192592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68847 ] 00:10:20.309 [2024-04-24 20:16:50.364386] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.569 [2024-04-24 20:16:50.616745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.137 Running I/O for 1 seconds... 
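A quick sanity check on the two result tables above: the MiB/s column is simply IOPS times the I/O size. For the 65536-byte big-I/O run, Nvme0n1p1 on core mask 0x1 reports 130.50 IOPS, and 130.50 x 65536 B = 8.16 MiB/s, matching the table; the 4096-byte verify run agrees the same way (1389.01 x 4096 B = 5.43 MiB/s).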
00:10:22.509 00:10:22.509 Latency(us) 00:10:22.509 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:22.509 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:22.509 Nvme0n1p1 : 1.02 8669.29 33.86 0.00 0.00 14701.50 11896.49 31373.06 00:10:22.509 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:22.509 Nvme0n1p2 : 1.02 8655.90 33.81 0.00 0.00 14700.25 12317.61 31583.61 00:10:22.509 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:22.509 Nvme1n1 : 1.02 8644.39 33.77 0.00 0.00 14671.98 12686.09 29478.04 00:10:22.509 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:22.509 Nvme2n1 : 1.03 8661.23 33.83 0.00 0.00 14538.71 12159.69 21266.30 00:10:22.509 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:22.509 Nvme2n2 : 1.03 8691.82 33.95 0.00 0.00 14465.83 6579.92 19897.68 00:10:22.509 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:22.509 Nvme2n3 : 1.03 8680.34 33.91 0.00 0.00 14455.87 6843.12 19371.28 00:10:22.509 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:22.509 Nvme3n1 : 1.03 8669.13 33.86 0.00 0.00 14432.39 7158.95 19160.73 00:10:22.509 =================================================================================================================== 00:10:22.509 Total : 60672.10 237.00 0.00 0.00 14565.91 6579.92 31583.61 00:10:23.888 00:10:23.888 real 0m3.639s 00:10:23.888 user 0m3.245s 00:10:23.888 sys 0m0.275s 00:10:23.888 ************************************ 00:10:23.888 END TEST bdev_write_zeroes 00:10:23.888 ************************************ 00:10:23.888 20:16:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:23.888 20:16:53 -- common/autotest_common.sh@10 -- # set +x 00:10:23.888 20:16:53 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:23.888 20:16:53 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:23.888 20:16:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.888 20:16:53 -- common/autotest_common.sh@10 -- # set +x 00:10:23.888 ************************************ 00:10:23.888 START TEST bdev_json_nonenclosed 00:10:23.888 ************************************ 00:10:23.888 20:16:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:23.888 [2024-04-24 20:16:53.988453] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:10:23.888 [2024-04-24 20:16:53.988581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68912 ] 00:10:24.147 [2024-04-24 20:16:54.159089] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.407 [2024-04-24 20:16:54.405365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.407 [2024-04-24 20:16:54.405469] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:10:24.407 [2024-04-24 20:16:54.405498] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:24.407 [2024-04-24 20:16:54.405512] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:24.666 00:10:24.666 real 0m0.979s 00:10:24.666 user 0m0.705s 00:10:24.666 sys 0m0.167s 00:10:24.666 20:16:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:24.666 ************************************ 00:10:24.666 END TEST bdev_json_nonenclosed 00:10:24.666 ************************************ 00:10:24.666 20:16:54 -- common/autotest_common.sh@10 -- # set +x 00:10:24.925 20:16:54 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:24.925 20:16:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:10:24.925 20:16:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.925 20:16:54 -- common/autotest_common.sh@10 -- # set +x 00:10:24.925 ************************************ 00:10:24.925 START TEST bdev_json_nonarray 00:10:24.925 ************************************ 00:10:24.925 20:16:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:24.925 [2024-04-24 20:16:55.121729] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:10:24.925 [2024-04-24 20:16:55.121869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68947 ] 00:10:25.184 [2024-04-24 20:16:55.276978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.444 [2024-04-24 20:16:55.558733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.444 [2024-04-24 20:16:55.558847] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
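The two JSON negative tests here feed bdevperf deliberately malformed configs and expect json_config to reject them with the errors shown. Illustrative shapes only -- the actual nonenclosed.json and nonarray.json fixtures are not reproduced in this log, so these contents are hypothetical:

# nonenclosed.json: the top level is not enclosed in {}
"subsystems": []

# nonarray.json: 'subsystems' is present but is not an array
{ "subsystems": {} }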
00:10:25.444 [2024-04-24 20:16:55.558891] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:25.444 [2024-04-24 20:16:55.558905] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:26.015 00:10:26.015 real 0m0.990s 00:10:26.015 user 0m0.734s 00:10:26.015 sys 0m0.149s 00:10:26.015 ************************************ 00:10:26.015 END TEST bdev_json_nonarray 00:10:26.015 ************************************ 00:10:26.015 20:16:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:26.015 20:16:56 -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 20:16:56 -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:10:26.015 20:16:56 -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:10:26.015 20:16:56 -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:10:26.015 20:16:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:26.015 20:16:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:26.015 20:16:56 -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 ************************************ 00:10:26.015 START TEST bdev_gpt_uuid 00:10:26.015 ************************************ 00:10:26.015 20:16:56 -- common/autotest_common.sh@1111 -- # bdev_gpt_uuid 00:10:26.015 20:16:56 -- bdev/blockdev.sh@614 -- # local bdev 00:10:26.015 20:16:56 -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:10:26.015 20:16:56 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68982 00:10:26.015 20:16:56 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:26.015 20:16:56 -- bdev/blockdev.sh@49 -- # waitforlisten 68982 00:10:26.015 20:16:56 -- common/autotest_common.sh@817 -- # '[' -z 68982 ']' 00:10:26.015 20:16:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.015 20:16:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:26.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.015 20:16:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.015 20:16:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:26.015 20:16:56 -- common/autotest_common.sh@10 -- # set +x 00:10:26.015 20:16:56 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:26.274 [2024-04-24 20:16:56.302326] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:10:26.274 [2024-04-24 20:16:56.302946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68982 ] 00:10:26.274 [2024-04-24 20:16:56.477800] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.534 [2024-04-24 20:16:56.715138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.469 20:16:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:27.469 20:16:57 -- common/autotest_common.sh@850 -- # return 0 00:10:27.469 20:16:57 -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:27.469 20:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:27.469 20:16:57 -- common/autotest_common.sh@10 -- # set +x 00:10:28.037 Some configs were skipped because the RPC state that can call them passed over. 
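The GPT UUID assertions that follow use rpc_cmd (the suite's wrapper around scripts/rpc.py against the spdk_tgt just started) together with jq to check that each partition bdev carries the expected UUID as both alias and unique partition GUID. The shape of one such check, lifted from the trace below (the <<< here-strings paraphrase the suite's variable plumbing):

bdev=$(rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030)
[[ $(jq -r 'length' <<< "$bdev") == 1 ]]
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == 6f89f330-603b-4116-ac73-2ca8eae53030 ]]
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == \
   6f89f330-603b-4116-ac73-2ca8eae53030 ]]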
00:10:28.037 20:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:28.037 20:16:57 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:10:28.037 20:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:28.037 20:16:57 -- common/autotest_common.sh@10 -- # set +x 00:10:28.037 20:16:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:28.037 20:16:57 -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:10:28.037 20:16:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:28.037 20:16:57 -- common/autotest_common.sh@10 -- # set +x 00:10:28.037 20:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:28.037 20:16:58 -- bdev/blockdev.sh@621 -- # bdev='[ 00:10:28.037 { 00:10:28.037 "name": "Nvme0n1p1", 00:10:28.037 "aliases": [ 00:10:28.037 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:10:28.037 ], 00:10:28.037 "product_name": "GPT Disk", 00:10:28.037 "block_size": 4096, 00:10:28.037 "num_blocks": 774144, 00:10:28.037 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:28.037 "md_size": 64, 00:10:28.037 "md_interleave": false, 00:10:28.037 "dif_type": 0, 00:10:28.037 "assigned_rate_limits": { 00:10:28.037 "rw_ios_per_sec": 0, 00:10:28.037 "rw_mbytes_per_sec": 0, 00:10:28.037 "r_mbytes_per_sec": 0, 00:10:28.037 "w_mbytes_per_sec": 0 00:10:28.037 }, 00:10:28.037 "claimed": false, 00:10:28.037 "zoned": false, 00:10:28.037 "supported_io_types": { 00:10:28.037 "read": true, 00:10:28.037 "write": true, 00:10:28.037 "unmap": true, 00:10:28.037 "write_zeroes": true, 00:10:28.037 "flush": true, 00:10:28.037 "reset": true, 00:10:28.037 "compare": true, 00:10:28.037 "compare_and_write": false, 00:10:28.037 "abort": true, 00:10:28.037 "nvme_admin": false, 00:10:28.037 "nvme_io": false 00:10:28.037 }, 00:10:28.037 "driver_specific": { 00:10:28.037 "gpt": { 00:10:28.037 "base_bdev": "Nvme0n1", 00:10:28.037 "offset_blocks": 256, 00:10:28.037 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:10:28.037 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:10:28.037 "partition_name": "SPDK_TEST_first" 00:10:28.037 } 00:10:28.037 } 00:10:28.037 } 00:10:28.037 ]' 00:10:28.037 20:16:58 -- bdev/blockdev.sh@622 -- # jq -r length 00:10:28.037 20:16:58 -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:10:28.037 20:16:58 -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:10:28.037 20:16:58 -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:28.037 20:16:58 -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:28.037 20:16:58 -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:10:28.037 20:16:58 -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:28.037 20:16:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:28.037 20:16:58 -- common/autotest_common.sh@10 -- # set +x 00:10:28.037 20:16:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:28.037 20:16:58 -- bdev/blockdev.sh@626 -- # bdev='[ 00:10:28.037 { 00:10:28.037 "name": "Nvme0n1p2", 00:10:28.037 "aliases": [ 00:10:28.037 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:10:28.037 ], 00:10:28.037 "product_name": "GPT Disk", 00:10:28.037 "block_size": 4096, 00:10:28.037 "num_blocks": 774143, 00:10:28.037 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:10:28.037 "md_size": 64, 00:10:28.037 "md_interleave": false, 00:10:28.037 "dif_type": 0, 00:10:28.037 "assigned_rate_limits": { 00:10:28.037 "rw_ios_per_sec": 0, 00:10:28.037 "rw_mbytes_per_sec": 0, 00:10:28.037 "r_mbytes_per_sec": 0, 00:10:28.037 "w_mbytes_per_sec": 0 00:10:28.037 }, 00:10:28.037 "claimed": false, 00:10:28.037 "zoned": false, 00:10:28.037 "supported_io_types": { 00:10:28.037 "read": true, 00:10:28.037 "write": true, 00:10:28.037 "unmap": true, 00:10:28.037 "write_zeroes": true, 00:10:28.037 "flush": true, 00:10:28.037 "reset": true, 00:10:28.037 "compare": true, 00:10:28.037 "compare_and_write": false, 00:10:28.037 "abort": true, 00:10:28.037 "nvme_admin": false, 00:10:28.037 "nvme_io": false 00:10:28.037 }, 00:10:28.037 "driver_specific": { 00:10:28.037 "gpt": { 00:10:28.037 "base_bdev": "Nvme0n1", 00:10:28.037 "offset_blocks": 774400, 00:10:28.037 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:10:28.037 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:10:28.037 "partition_name": "SPDK_TEST_second" 00:10:28.037 } 00:10:28.037 } 00:10:28.037 } 00:10:28.037 ]' 00:10:28.037 20:16:58 -- bdev/blockdev.sh@627 -- # jq -r length 00:10:28.037 20:16:58 -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:10:28.037 20:16:58 -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:10:28.037 20:16:58 -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:28.037 20:16:58 -- bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:10:28.296 20:16:58 -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:10:28.296 20:16:58 -- bdev/blockdev.sh@631 -- # killprocess 68982 00:10:28.296 20:16:58 -- common/autotest_common.sh@936 -- # '[' -z 68982 ']' 00:10:28.296 20:16:58 -- common/autotest_common.sh@940 -- # kill -0 68982 00:10:28.296 20:16:58 -- common/autotest_common.sh@941 -- # uname 00:10:28.296 20:16:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:28.296 20:16:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68982 00:10:28.296 20:16:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:28.296 20:16:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:28.296 killing process with pid 68982 00:10:28.296 20:16:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68982' 00:10:28.296 20:16:58 -- common/autotest_common.sh@955 -- # kill 68982 00:10:28.296 20:16:58 -- common/autotest_common.sh@960 -- # wait 68982 00:10:30.831 00:10:30.831 real 0m4.479s 00:10:30.831 user 0m4.543s 00:10:30.831 sys 0m0.535s 00:10:30.831 20:17:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:30.831 20:17:00 -- common/autotest_common.sh@10 -- # set +x 00:10:30.831 ************************************ 00:10:30.831 END TEST bdev_gpt_uuid 00:10:30.831 ************************************ 00:10:30.831 20:17:00 -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:10:30.831 20:17:00 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:10:30.831 20:17:00 -- bdev/blockdev.sh@811 -- # cleanup 00:10:30.831 20:17:00 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:30.831 20:17:00 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:30.831 20:17:00 -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 
00:10:30.831 20:17:00 -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:10:30.831 20:17:00 -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:10:30.831 20:17:00 -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:31.091 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:31.350 Waiting for block devices as requested 00:10:31.350 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:31.609 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:31.609 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:31.868 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:37.212 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:37.212 20:17:06 -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:10:37.212 20:17:06 -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme1n1 00:10:37.212 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:10:37.212 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:10:37.212 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:10:37.212 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:10:37.212 20:17:07 -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:10:37.212 00:10:37.212 real 1m8.087s 00:10:37.212 user 1m23.671s 00:10:37.212 sys 0m11.698s 00:10:37.212 20:17:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:37.212 20:17:07 -- common/autotest_common.sh@10 -- # set +x 00:10:37.212 ************************************ 00:10:37.212 END TEST blockdev_nvme_gpt 00:10:37.212 ************************************ 00:10:37.212 20:17:07 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:37.212 20:17:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:37.212 20:17:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:37.212 20:17:07 -- common/autotest_common.sh@10 -- # set +x 00:10:37.212 ************************************ 00:10:37.212 START TEST nvme 00:10:37.212 ************************************ 00:10:37.212 20:17:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:10:37.473 * Looking for test storage... 
00:10:37.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:37.473 20:17:07 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:38.042 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:38.610 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:38.869 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:38.869 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:38.869 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:38.869 20:17:09 -- nvme/nvme.sh@79 -- # uname 00:10:38.869 20:17:09 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:10:38.869 20:17:09 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:10:38.869 20:17:09 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:10:38.869 20:17:09 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:10:38.869 20:17:09 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:10:38.869 20:17:09 -- common/autotest_common.sh@1055 -- # echo 0 00:10:38.869 20:17:09 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:10:38.869 20:17:09 -- common/autotest_common.sh@1057 -- # stubpid=69636 00:10:38.869 Waiting for stub to ready for secondary processes... 00:10:38.869 20:17:09 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:10:39.128 20:17:09 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:39.128 20:17:09 -- common/autotest_common.sh@1061 -- # [[ -e /proc/69636 ]] 00:10:39.128 20:17:09 -- common/autotest_common.sh@1062 -- # sleep 1s 00:10:39.128 [2024-04-24 20:17:09.159326] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:10:39.128 [2024-04-24 20:17:09.159564] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:10:40.065 20:17:10 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:40.065 20:17:10 -- common/autotest_common.sh@1061 -- # [[ -e /proc/69636 ]] 00:10:40.065 20:17:10 -- common/autotest_common.sh@1062 -- # sleep 1s 00:10:40.065 [2024-04-24 20:17:10.150073] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:40.324 [2024-04-24 20:17:10.378787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.324 [2024-04-24 20:17:10.378959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.324 [2024-04-24 20:17:10.378988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.324 [2024-04-24 20:17:10.397022] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:10:40.324 [2024-04-24 20:17:10.397058] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:40.324 [2024-04-24 20:17:10.410629] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:10:40.324 [2024-04-24 20:17:10.410754] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:10:40.324 [2024-04-24 20:17:10.414564] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:40.324 [2024-04-24 20:17:10.415155] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:10:40.324 [2024-04-24 20:17:10.415513] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:10:40.324 [2024-04-24 20:17:10.423685] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:40.324 [2024-04-24 20:17:10.424117] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:10:40.324 [2024-04-24 20:17:10.424534] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:10:40.324 [2024-04-24 20:17:10.431001] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:10:40.324 [2024-04-24 20:17:10.431262] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:10:40.324 [2024-04-24 20:17:10.431370] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:10:40.324 [2024-04-24 20:17:10.431467] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:10:40.324 [2024-04-24 20:17:10.431576] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:10:40.892 20:17:11 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:10:40.892 done. 00:10:40.892 20:17:11 -- common/autotest_common.sh@1064 -- # echo done. 
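The stub handshake traced above is a simple file-based protocol: the stub creates /var/run/spdk_stub0 once its primary DPDK process is ready, and the harness polls for that file while confirming the stub PID is still alive. In outline (flags, paths, and the 1-second poll taken from the trace):

/home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
stubpid=$!
echo 'Waiting for stub to ready for secondary processes...'
while [ ! -e /var/run/spdk_stub0 ]; do
    [[ -e /proc/$stubpid ]] || exit 1   # bail out if the stub died early
    sleep 1s
done
echo done.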
00:10:40.892 20:17:11 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:40.892 20:17:11 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:10:40.892 20:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:40.892 20:17:11 -- common/autotest_common.sh@10 -- # set +x 00:10:41.152 ************************************ 00:10:41.152 START TEST nvme_reset 00:10:41.152 ************************************ 00:10:41.152 20:17:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:10:41.411 Initializing NVMe Controllers 00:10:41.411 Skipping QEMU NVMe SSD at 0000:00:10.0 00:10:41.411 Skipping QEMU NVMe SSD at 0000:00:11.0 00:10:41.411 Skipping QEMU NVMe SSD at 0000:00:13.0 00:10:41.411 Skipping QEMU NVMe SSD at 0000:00:12.0 00:10:41.411 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:10:41.411 ************************************ 00:10:41.411 END TEST nvme_reset 00:10:41.411 ************************************ 00:10:41.411 00:10:41.411 real 0m0.301s 00:10:41.411 user 0m0.136s 00:10:41.411 sys 0m0.119s 00:10:41.411 20:17:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:41.411 20:17:11 -- common/autotest_common.sh@10 -- # set +x 00:10:41.411 20:17:11 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:10:41.411 20:17:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:41.411 20:17:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:41.411 20:17:11 -- common/autotest_common.sh@10 -- # set +x 00:10:41.411 ************************************ 00:10:41.411 START TEST nvme_identify 00:10:41.411 ************************************ 00:10:41.411 20:17:11 -- common/autotest_common.sh@1111 -- # nvme_identify 00:10:41.411 20:17:11 -- nvme/nvme.sh@12 -- # bdfs=() 00:10:41.411 20:17:11 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:10:41.411 20:17:11 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:10:41.411 20:17:11 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:10:41.411 20:17:11 -- common/autotest_common.sh@1499 -- # bdfs=() 00:10:41.411 20:17:11 -- common/autotest_common.sh@1499 -- # local bdfs 00:10:41.411 20:17:11 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:41.411 20:17:11 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:41.411 20:17:11 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:10:41.670 20:17:11 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:10:41.670 20:17:11 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:41.670 20:17:11 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:10:41.932 ===================================================== 00:10:41.932 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:41.932 ===================================================== 00:10:41.932 Controller Capabilities/Features 00:10:41.932 ================================ 00:10:41.932 Vendor ID: 1b36 00:10:41.932 Subsystem Vendor ID: 1af4 00:10:41.932 Serial Number: 12340 00:10:41.932 Model Number: QEMU NVMe Ctrl 00:10:41.932 Firmware Version: 8.0.0 00:10:41.932 Recommended Arb Burst: 6 00:10:41.932 IEEE OUI Identifier: 00 54 52 00:10:41.932 Multi-path I/O 00:10:41.932 May have multiple subsystem ports: No 00:10:41.932 May have 
multiple controllers: No 00:10:41.932 Associated with SR-IOV VF: No 00:10:41.932 Max Data Transfer Size: 524288 00:10:41.932 Max Number of Namespaces: 256 00:10:41.932 Max Number of I/O Queues: 64 00:10:41.932 NVMe Specification Version (VS): 1.4 00:10:41.932 NVMe Specification Version (Identify): 1.4 00:10:41.932 Maximum Queue Entries: 2048 00:10:41.932 Contiguous Queues Required: Yes 00:10:41.932 Arbitration Mechanisms Supported 00:10:41.932 Weighted Round Robin: Not Supported 00:10:41.932 Vendor Specific: Not Supported 00:10:41.932 Reset Timeout: 7500 ms 00:10:41.932 Doorbell Stride: 4 bytes 00:10:41.932 NVM Subsystem Reset: Not Supported 00:10:41.932 Command Sets Supported 00:10:41.932 NVM Command Set: Supported 00:10:41.932 Boot Partition: Not Supported 00:10:41.932 Memory Page Size Minimum: 4096 bytes 00:10:41.932 Memory Page Size Maximum: 65536 bytes 00:10:41.932 Persistent Memory Region: Not Supported 00:10:41.932 Optional Asynchronous Events Supported 00:10:41.932 Namespace Attribute Notices: Supported 00:10:41.932 Firmware Activation Notices: Not Supported 00:10:41.932 ANA Change Notices: Not Supported 00:10:41.932 PLE Aggregate Log Change Notices: Not Supported 00:10:41.932 LBA Status Info Alert Notices: Not Supported 00:10:41.932 EGE Aggregate Log Change Notices: Not Supported 00:10:41.932 Normal NVM Subsystem Shutdown event: Not Supported 00:10:41.932 Zone Descriptor Change Notices: Not Supported 00:10:41.932 Discovery Log Change Notices: Not Supported 00:10:41.932 Controller Attributes 00:10:41.932 128-bit Host Identifier: Not Supported 00:10:41.932 Non-Operational Permissive Mode: Not Supported 00:10:41.932 NVM Sets: Not Supported 00:10:41.932 Read Recovery Levels: Not Supported 00:10:41.932 Endurance Groups: Not Supported 00:10:41.932 Predictable Latency Mode: Not Supported 00:10:41.932 Traffic Based Keep ALive: Not Supported 00:10:41.932 Namespace Granularity: Not Supported 00:10:41.932 SQ Associations: Not Supported 00:10:41.932 UUID List: Not Supported 00:10:41.932 Multi-Domain Subsystem: Not Supported 00:10:41.932 Fixed Capacity Management: Not Supported 00:10:41.932 Variable Capacity Management: Not Supported 00:10:41.932 Delete Endurance Group: Not Supported 00:10:41.932 Delete NVM Set: Not Supported 00:10:41.932 Extended LBA Formats Supported: Supported 00:10:41.932 Flexible Data Placement Supported: Not Supported 00:10:41.932 00:10:41.932 Controller Memory Buffer Support 00:10:41.932 ================================ 00:10:41.932 Supported: No 00:10:41.932 00:10:41.932 Persistent Memory Region Support 00:10:41.932 ================================ 00:10:41.932 Supported: No 00:10:41.932 00:10:41.932 Admin Command Set Attributes 00:10:41.932 ============================ 00:10:41.932 Security Send/Receive: Not Supported 00:10:41.932 Format NVM: Supported 00:10:41.932 Firmware Activate/Download: Not Supported 00:10:41.932 Namespace Management: Supported 00:10:41.932 Device Self-Test: Not Supported 00:10:41.932 Directives: Supported 00:10:41.932 NVMe-MI: Not Supported 00:10:41.932 Virtualization Management: Not Supported 00:10:41.932 Doorbell Buffer Config: Supported 00:10:41.932 Get LBA Status Capability: Not Supported 00:10:41.932 Command & Feature Lockdown Capability: Not Supported 00:10:41.932 Abort Command Limit: 4 00:10:41.932 Async Event Request Limit: 4 00:10:41.932 Number of Firmware Slots: N/A 00:10:41.932 Firmware Slot 1 Read-Only: N/A 00:10:41.932 Firmware Activation Without Reset: N/A 00:10:41.932 Multiple Update Detection Support: N/A 00:10:41.932 Firmware 
Update Granularity: No Information Provided [2024-04-24 20:17:11.980252] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 69673 terminated unexpected 00:10:41.932 Per-Namespace SMART Log: Yes 00:10:41.932 Asymmetric Namespace Access Log Page: Not Supported 00:10:41.932 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:41.932 Command Effects Log Page: Supported 00:10:41.932 Get Log Page Extended Data: Supported 00:10:41.932 Telemetry Log Pages: Not Supported 00:10:41.932 Persistent Event Log Pages: Not Supported 00:10:41.932 Supported Log Pages Log Page: May Support 00:10:41.932 Commands Supported & Effects Log Page: Not Supported 00:10:41.932 Feature Identifiers & Effects Log Page:May Support 00:10:41.932 NVMe-MI Commands & Effects Log Page: May Support 00:10:41.932 Data Area 4 for Telemetry Log: Not Supported 00:10:41.932 Error Log Page Entries Supported: 1 00:10:41.932 Keep Alive: Not Supported 00:10:41.932 00:10:41.932 NVM Command Set Attributes 00:10:41.932 ========================== 00:10:41.932 Submission Queue Entry Size 00:10:41.932 Max: 64 00:10:41.932 Min: 64 00:10:41.932 Completion Queue Entry Size 00:10:41.932 Max: 16 00:10:41.932 Min: 16 00:10:41.932 Number of Namespaces: 256 00:10:41.932 Compare Command: Supported 00:10:41.932 Write Uncorrectable Command: Not Supported 00:10:41.932 Dataset Management Command: Supported 00:10:41.932 Write Zeroes Command: Supported 00:10:41.932 Set Features Save Field: Supported 00:10:41.932 Reservations: Not Supported 00:10:41.932 Timestamp: Supported 00:10:41.932 Copy: Supported 00:10:41.932 Volatile Write Cache: Present 00:10:41.932 Atomic Write Unit (Normal): 1 00:10:41.932 Atomic Write Unit (PFail): 1 00:10:41.932 Atomic Compare & Write Unit: 1 00:10:41.932 Fused Compare & Write: Not Supported 00:10:41.932 Scatter-Gather List 00:10:41.932 SGL Command Set: Supported 00:10:41.932 SGL Keyed: Not Supported 00:10:41.932 SGL Bit Bucket Descriptor: Not Supported 00:10:41.932 SGL Metadata Pointer: Not Supported 00:10:41.932 Oversized SGL: Not Supported 00:10:41.932 SGL Metadata Address: Not Supported 00:10:41.932 SGL Offset: Not Supported 00:10:41.932 Transport SGL Data Block: Not Supported 00:10:41.932 Replay Protected Memory Block: Not Supported 00:10:41.932 00:10:41.932 Firmware Slot Information 00:10:41.932 ========================= 00:10:41.932 Active slot: 1 00:10:41.932 Slot 1 Firmware Revision: 1.0 00:10:41.932 00:10:41.932 00:10:41.932 Commands Supported and Effects 00:10:41.932 ============================== 00:10:41.932 Admin Commands 00:10:41.932 -------------- 00:10:41.932 Delete I/O Submission Queue (00h): Supported 00:10:41.932 Create I/O Submission Queue (01h): Supported 00:10:41.932 Get Log Page (02h): Supported 00:10:41.932 Delete I/O Completion Queue (04h): Supported 00:10:41.932 Create I/O Completion Queue (05h): Supported 00:10:41.932 Identify (06h): Supported 00:10:41.932 Abort (08h): Supported 00:10:41.932 Set Features (09h): Supported 00:10:41.932 Get Features (0Ah): Supported 00:10:41.932 Asynchronous Event Request (0Ch): Supported 00:10:41.932 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:41.932 Directive Send (19h): Supported 00:10:41.932 Directive Receive (1Ah): Supported 00:10:41.932 Virtualization Management (1Ch): Supported 00:10:41.932 Doorbell Buffer Config (7Ch): Supported 00:10:41.932 Format NVM (80h): Supported LBA-Change 00:10:41.932 I/O Commands 00:10:41.932 ------------ 00:10:41.932 Flush (00h): Supported LBA-Change 00:10:41.932 Write (01h): 
Supported LBA-Change 00:10:41.933 Read (02h): Supported 00:10:41.933 Compare (05h): Supported 00:10:41.933 Write Zeroes (08h): Supported LBA-Change 00:10:41.933 Dataset Management (09h): Supported LBA-Change 00:10:41.933 Unknown (0Ch): Supported 00:10:41.933 Unknown (12h): Supported 00:10:41.933 Copy (19h): Supported LBA-Change 00:10:41.933 Unknown (1Dh): Supported LBA-Change 00:10:41.933 00:10:41.933 Error Log 00:10:41.933 ========= 00:10:41.933 00:10:41.933 Arbitration 00:10:41.933 =========== 00:10:41.933 Arbitration Burst: no limit 00:10:41.933 00:10:41.933 Power Management 00:10:41.933 ================ 00:10:41.933 Number of Power States: 1 00:10:41.933 Current Power State: Power State #0 00:10:41.933 Power State #0: 00:10:41.933 Max Power: 25.00 W 00:10:41.933 Non-Operational State: Operational 00:10:41.933 Entry Latency: 16 microseconds 00:10:41.933 Exit Latency: 4 microseconds 00:10:41.933 Relative Read Throughput: 0 00:10:41.933 Relative Read Latency: 0 00:10:41.933 Relative Write Throughput: 0 00:10:41.933 Relative Write Latency: 0 00:10:41.933 Idle Power: Not Reported [2024-04-24 20:17:11.981871] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 69673 terminated unexpected 00:10:41.933 Active Power: Not Reported 00:10:41.933 Non-Operational Permissive Mode: Not Supported 00:10:41.933 00:10:41.933 Health Information 00:10:41.933 ================== 00:10:41.933 Critical Warnings: 00:10:41.933 Available Spare Space: OK 00:10:41.933 Temperature: OK 00:10:41.933 Device Reliability: OK 00:10:41.933 Read Only: No 00:10:41.933 Volatile Memory Backup: OK 00:10:41.933 Current Temperature: 323 Kelvin (50 Celsius) 00:10:41.933 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:41.933 Available Spare: 0% 00:10:41.933 Available Spare Threshold: 0% 00:10:41.933 Life Percentage Used: 0% 00:10:41.933 Data Units Read: 1173 00:10:41.933 Data Units Written: 1005 00:10:41.933 Host Read Commands: 54933 00:10:41.933 Host Write Commands: 53401 00:10:41.933 Controller Busy Time: 0 minutes 00:10:41.933 Power Cycles: 0 00:10:41.933 Power On Hours: 0 hours 00:10:41.933 Unsafe Shutdowns: 0 00:10:41.933 Unrecoverable Media Errors: 0 00:10:41.933 Lifetime Error Log Entries: 0 00:10:41.933 Warning Temperature Time: 0 minutes 00:10:41.933 Critical Temperature Time: 0 minutes 00:10:41.933 00:10:41.933 Number of Queues 00:10:41.933 ================ 00:10:41.933 Number of I/O Submission Queues: 64 00:10:41.933 Number of I/O Completion Queues: 64 00:10:41.933 00:10:41.933 ZNS Specific Controller Data 00:10:41.933 ============================ 00:10:41.933 Zone Append Size Limit: 0 00:10:41.933 00:10:41.933 00:10:41.933 Active Namespaces 00:10:41.933 ================= 00:10:41.933 Namespace ID:1 00:10:41.933 Error Recovery Timeout: Unlimited 00:10:41.933 Command Set Identifier: NVM (00h) 00:10:41.933 Deallocate: Supported 00:10:41.933 Deallocated/Unwritten Error: Supported 00:10:41.933 Deallocated Read Value: All 0x00 00:10:41.933 Deallocate in Write Zeroes: Not Supported 00:10:41.933 Deallocated Guard Field: 0xFFFF 00:10:41.933 Flush: Supported 00:10:41.933 Reservation: Not Supported 00:10:41.933 Metadata Transferred as: Separate Metadata Buffer 00:10:41.933 Namespace Sharing Capabilities: Private 00:10:41.933 Size (in LBAs): 1548666 (5GiB) 00:10:41.933 Capacity (in LBAs): 1548666 (5GiB) 00:10:41.933 Utilization (in LBAs): 1548666 (5GiB) 00:10:41.933 Thin Provisioning: Not Supported 00:10:41.933 Per-NS Atomic Units: No 00:10:41.933 Maximum Single Source Range Length: 
128 00:10:41.933 Maximum Copy Length: 128 00:10:41.933 Maximum Source Range Count: 128 00:10:41.933 NGUID/EUI64 Never Reused: No 00:10:41.933 Namespace Write Protected: No 00:10:41.933 Number of LBA Formats: 8 00:10:41.933 Current LBA Format: LBA Format #07 00:10:41.933 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:41.933 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:41.933 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:41.933 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:41.933 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:41.933 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:41.933 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:41.933 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:41.933 00:10:41.933 ===================================================== 00:10:41.933 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:41.933 ===================================================== 00:10:41.933 Controller Capabilities/Features 00:10:41.933 ================================ 00:10:41.933 Vendor ID: 1b36 00:10:41.933 Subsystem Vendor ID: 1af4 00:10:41.933 Serial Number: 12341 00:10:41.933 Model Number: QEMU NVMe Ctrl 00:10:41.933 Firmware Version: 8.0.0 00:10:41.933 Recommended Arb Burst: 6 00:10:41.933 IEEE OUI Identifier: 00 54 52 00:10:41.933 Multi-path I/O 00:10:41.933 May have multiple subsystem ports: No 00:10:41.933 May have multiple controllers: No 00:10:41.933 Associated with SR-IOV VF: No 00:10:41.933 Max Data Transfer Size: 524288 00:10:41.933 Max Number of Namespaces: 256 00:10:41.933 Max Number of I/O Queues: 64 00:10:41.933 NVMe Specification Version (VS): 1.4 00:10:41.933 NVMe Specification Version (Identify): 1.4 00:10:41.933 Maximum Queue Entries: 2048 00:10:41.933 Contiguous Queues Required: Yes 00:10:41.933 Arbitration Mechanisms Supported 00:10:41.933 Weighted Round Robin: Not Supported 00:10:41.933 Vendor Specific: Not Supported 00:10:41.933 Reset Timeout: 7500 ms 00:10:41.933 Doorbell Stride: 4 bytes 00:10:41.933 NVM Subsystem Reset: Not Supported 00:10:41.933 Command Sets Supported 00:10:41.933 NVM Command Set: Supported 00:10:41.933 Boot Partition: Not Supported 00:10:41.933 Memory Page Size Minimum: 4096 bytes 00:10:41.933 Memory Page Size Maximum: 65536 bytes 00:10:41.933 Persistent Memory Region: Not Supported 00:10:41.933 Optional Asynchronous Events Supported 00:10:41.933 Namespace Attribute Notices: Supported 00:10:41.933 Firmware Activation Notices: Not Supported 00:10:41.933 ANA Change Notices: Not Supported 00:10:41.933 PLE Aggregate Log Change Notices: Not Supported 00:10:41.933 LBA Status Info Alert Notices: Not Supported 00:10:41.933 EGE Aggregate Log Change Notices: Not Supported 00:10:41.933 Normal NVM Subsystem Shutdown event: Not Supported 00:10:41.933 Zone Descriptor Change Notices: Not Supported 00:10:41.933 Discovery Log Change Notices: Not Supported 00:10:41.933 Controller Attributes 00:10:41.933 128-bit Host Identifier: Not Supported 00:10:41.933 Non-Operational Permissive Mode: Not Supported 00:10:41.933 NVM Sets: Not Supported 00:10:41.933 Read Recovery Levels: Not Supported 00:10:41.933 Endurance Groups: Not Supported 00:10:41.933 Predictable Latency Mode: Not Supported 00:10:41.933 Traffic Based Keep ALive: Not Supported 00:10:41.933 Namespace Granularity: Not Supported 00:10:41.933 SQ Associations: Not Supported 00:10:41.933 UUID List: Not Supported 00:10:41.933 Multi-Domain Subsystem: Not Supported 00:10:41.933 Fixed Capacity Management: Not Supported 00:10:41.933 Variable Capacity 
Management: Not Supported 00:10:41.933 Delete Endurance Group: Not Supported 00:10:41.933 Delete NVM Set: Not Supported 00:10:41.933 Extended LBA Formats Supported: Supported 00:10:41.933 Flexible Data Placement Supported: Not Supported 00:10:41.933 00:10:41.933 Controller Memory Buffer Support 00:10:41.933 ================================ 00:10:41.933 Supported: No 00:10:41.933 00:10:41.933 Persistent Memory Region Support 00:10:41.933 ================================ 00:10:41.933 Supported: No 00:10:41.933 00:10:41.933 Admin Command Set Attributes 00:10:41.933 ============================ 00:10:41.933 Security Send/Receive: Not Supported 00:10:41.933 Format NVM: Supported 00:10:41.933 Firmware Activate/Download: Not Supported 00:10:41.933 Namespace Management: Supported 00:10:41.933 Device Self-Test: Not Supported 00:10:41.933 Directives: Supported 00:10:41.933 NVMe-MI: Not Supported 00:10:41.933 Virtualization Management: Not Supported 00:10:41.933 Doorbell Buffer Config: Supported 00:10:41.933 Get LBA Status Capability: Not Supported 00:10:41.933 Command & Feature Lockdown Capability: Not Supported 00:10:41.933 Abort Command Limit: 4 00:10:41.933 Async Event Request Limit: 4 00:10:41.933 Number of Firmware Slots: N/A 00:10:41.933 Firmware Slot 1 Read-Only: N/A 00:10:41.933 Firmware Activation Without Reset: N/A 00:10:41.933 Multiple Update Detection Support: N/A 00:10:41.933 Firmware Update Granularity: No Information Provided 00:10:41.933 Per-Namespace SMART Log: Yes 00:10:41.933 Asymmetric Namespace Access Log Page: Not Supported 00:10:41.933 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:41.933 Command Effects Log Page: Supported 00:10:41.933 Get Log Page Extended Data: Supported 00:10:41.933 Telemetry Log Pages: Not Supported 00:10:41.933 Persistent Event Log Pages: Not Supported 00:10:41.933 Supported Log Pages Log Page: May Support 00:10:41.934 Commands Supported & Effects Log Page: Not Supported 00:10:41.934 Feature Identifiers & Effects Log Page:May Support 00:10:41.934 NVMe-MI Commands & Effects Log Page: May Support 00:10:41.934 Data Area 4 for Telemetry Log: Not Supported 00:10:41.934 Error Log Page Entries Supported: 1 00:10:41.934 Keep Alive: Not Supported 00:10:41.934 00:10:41.934 NVM Command Set Attributes 00:10:41.934 ========================== 00:10:41.934 Submission Queue Entry Size 00:10:41.934 Max: 64 00:10:41.934 Min: 64 00:10:41.934 Completion Queue Entry Size 00:10:41.934 Max: 16 00:10:41.934 Min: 16 00:10:41.934 Number of Namespaces: 256 00:10:41.934 Compare Command: Supported 00:10:41.934 Write Uncorrectable Command: Not Supported 00:10:41.934 Dataset Management Command: Supported 00:10:41.934 Write Zeroes Command: Supported 00:10:41.934 Set Features Save Field: Supported 00:10:41.934 Reservations: Not Supported 00:10:41.934 Timestamp: Supported 00:10:41.934 Copy: Supported 00:10:41.934 Volatile Write Cache: Present 00:10:41.934 Atomic Write Unit (Normal): 1 00:10:41.934 Atomic Write Unit (PFail): 1 00:10:41.934 Atomic Compare & Write Unit: 1 00:10:41.934 Fused Compare & Write: Not Supported 00:10:41.934 Scatter-Gather List 00:10:41.934 SGL Command Set: Supported 00:10:41.934 SGL Keyed: Not Supported 00:10:41.934 SGL Bit Bucket Descriptor: Not Supported 00:10:41.934 SGL Metadata Pointer: Not Supported 00:10:41.934 Oversized SGL: Not Supported 00:10:41.934 SGL Metadata Address: Not Supported 00:10:41.934 SGL Offset: Not Supported 00:10:41.934 Transport SGL Data Block: Not Supported 00:10:41.934 Replay Protected Memory Block: Not Supported 00:10:41.934 
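The identify test's ++ trace earlier shows how the BDF list is built: gen_nvme.sh emits an SPDK JSON config and jq extracts each controller's PCI address. A compact reconstruction of that helper, assuming the repo layout used in this run (a sketch of the traced logic, not the verbatim source):

  # Enumerate NVMe PCI addresses (BDFs) from SPDK's generated JSON config.
  get_nvme_bdfs() {
      local rootdir=/home/vagrant/spdk_repo/spdk
      local bdfs
      bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
      (( ${#bdfs[@]} == 0 )) && return 1   # mirrors the (( 4 == 0 )) guard above
      printf '%s\n' "${bdfs[@]}"
  }

Here it yields the four QEMU controllers: 0000:00:10.0, 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0.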
00:10:41.934 Firmware Slot Information 00:10:41.934 ========================= 00:10:41.934 Active slot: 1 00:10:41.934 Slot 1 Firmware Revision: 1.0 00:10:41.934 00:10:41.934 00:10:41.934 Commands Supported and Effects 00:10:41.934 ============================== 00:10:41.934 Admin Commands 00:10:41.934 -------------- 00:10:41.934 Delete I/O Submission Queue (00h): Supported 00:10:41.934 Create I/O Submission Queue (01h): Supported 00:10:41.934 Get Log Page (02h): Supported 00:10:41.934 Delete I/O Completion Queue (04h): Supported 00:10:41.934 Create I/O Completion Queue (05h): Supported 00:10:41.934 Identify (06h): Supported 00:10:41.934 Abort (08h): Supported 00:10:41.934 Set Features (09h): Supported 00:10:41.934 Get Features (0Ah): Supported 00:10:41.934 Asynchronous Event Request (0Ch): Supported 00:10:41.934 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:41.934 Directive Send (19h): Supported 00:10:41.934 Directive Receive (1Ah): Supported 00:10:41.934 Virtualization Management (1Ch): Supported 00:10:41.934 Doorbell Buffer Config (7Ch): Supported 00:10:41.934 Format NVM (80h): Supported LBA-Change 00:10:41.934 I/O Commands 00:10:41.934 ------------ 00:10:41.934 Flush (00h): Supported LBA-Change 00:10:41.934 Write (01h): Supported LBA-Change 00:10:41.934 Read (02h): Supported 00:10:41.934 Compare (05h): Supported 00:10:41.934 Write Zeroes (08h): Supported LBA-Change 00:10:41.934 Dataset Management (09h): Supported LBA-Change 00:10:41.934 Unknown (0Ch): Supported 00:10:41.934 Unknown (12h): Supported 00:10:41.934 Copy (19h): Supported LBA-Change 00:10:41.934 Unknown (1Dh): Supported LBA-Change 00:10:41.934 00:10:41.934 Error Log 00:10:41.934 ========= 00:10:41.934 00:10:41.934 Arbitration 00:10:41.934 =========== 00:10:41.934 Arbitration Burst: no limit 00:10:41.934 00:10:41.934 Power Management 00:10:41.934 ================ 00:10:41.934 Number of Power States: 1 00:10:41.934 Current Power State: Power State #0 00:10:41.934 Power State #0: 00:10:41.934 Max Power: 25.00 W 00:10:41.934 Non-Operational State: Operational 00:10:41.934 Entry Latency: 16 microseconds 00:10:41.934 Exit Latency: 4 microseconds 00:10:41.934 Relative Read Throughput: 0 00:10:41.934 Relative Read Latency: 0 00:10:41.934 Relative Write Throughput: 0 00:10:41.934 Relative Write Latency: 0 00:10:41.934 Idle Power: Not Reported 00:10:41.934 Active Power: Not Reported 00:10:41.934 Non-Operational Permissive Mode: Not Supported 00:10:41.934 00:10:41.934 Health Information 00:10:41.934 ================== 00:10:41.934 Critical Warnings: 00:10:41.934 Available Spare Space: OK 00:10:41.934 Temperature: OK 00:10:41.934 Device Reliability: OK 00:10:41.934 Read Only: No 00:10:41.934 Volatile Memory Backup: OK 00:10:41.934 Current Temperature: 323 Kelvin (50 Celsius) 00:10:41.934 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:41.934 Available Spare: 0% 00:10:41.934 Available Spare Threshold: 0% 00:10:41.934 Life Percentage Used: 0% 00:10:41.934 Data Units Read: 870 00:10:41.934 Data Units Written: 715 00:10:41.934 Host Read Commands: 39501 00:10:41.934 Host Write Commands: 37135 00:10:41.934 Controller Busy Time: 0 minutes 00:10:41.934 Power Cycles: 0 00:10:41.934 Power On Hours: 0 hours 00:10:41.934 Unsafe Shutdowns: 0 00:10:41.934 Unrecoverable Media Errors: 0 00:10:41.934 Lifetime Error Log Entries: 0 00:10:41.934 Warning Temperature Time: 0 minutes 00:10:41.934 Critical Temperature Time: 0 minutes 00:10:41.934 00:10:41.934 Number of Queues 00:10:41.934 ================ 00:10:41.934 Number of I/O 
Submission Queues: 64 00:10:41.934 Number of I/O Completion Queues: 64 00:10:41.934 00:10:41.934 ZNS Specific Controller Data 00:10:41.934 ============================ 00:10:41.934 Zone Append Size Limit: 0 00:10:41.934 00:10:41.934 00:10:41.934 Active Namespaces 00:10:41.934 ================= 00:10:41.934 Namespace ID:1 00:10:41.934 Error Recovery Timeout: Unlimited 00:10:41.934 Command Set Identifier: NVM (00h) 00:10:41.934 Deallocate: Supported 00:10:41.934 Deallocated/Unwritten Error: Supported 00:10:41.934 Deallocated Read Value: All 0x00 00:10:41.934 Deallocate in Write Zeroes: Not Supported 00:10:41.934 Deallocated Guard Field: 0xFFFF 00:10:41.934 Flush: Supported 00:10:41.934 Reservation: Not Supported 00:10:41.934 Namespace Sharing Capabilities: Private 00:10:41.934 Size (in LBAs): 1310720 (5GiB) 00:10:41.934 Capacity (in LBAs): 1310720 (5GiB) 00:10:41.934 Utilization (in LBAs): 1310720 (5GiB) 00:10:41.934 Thin Provisioning: Not Supported 00:10:41.934 Per-NS Atomic Units: No 00:10:41.934 Maximum Single Source Range Length: 128 00:10:41.934 Maximum Copy Length: 128 00:10:41.934 Maximum Source Range Count: 128 00:10:41.934 NGUID/EUI64 Never Reused: No 00:10:41.934 Namespace Write Protected: No 00:10:41.934 Number of LBA Formats: 8 00:10:41.934 Current LBA Format: LBA Format #04 00:10:41.934 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:41.934 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:41.934 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:41.934 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:41.934 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:41.934 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:41.934 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:41.934 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:41.934 00:10:41.934 ===================================================== 00:10:41.934 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:41.934 ===================================================== 00:10:41.934 Controller Capabilities/Features 00:10:41.934 ================================ 00:10:41.934 Vendor ID: 1b36 00:10:41.934 Subsystem Vendor ID: 1af4 00:10:41.934 Serial Number: 12343 00:10:41.934 Model Number: QEMU NVMe Ctrl 00:10:41.934 Firmware Version: 8.0.0 00:10:41.934 Recommended Arb Burst: 6 00:10:41.934 IEEE OUI Identifier: 00 54 52 00:10:41.934 Multi-path I/O 00:10:41.934 May have multiple subsystem ports: No 00:10:41.934 May have multiple controllers: Yes 00:10:41.934 Associated with SR-IOV VF: No 00:10:41.934 Max Data Transfer Size: 524288 00:10:41.934 Max Number of Namespaces: 256 00:10:41.934 Max Number of I/O Queues: 64 00:10:41.934 NVMe Specification Version (VS): 1.4 00:10:41.934 NVMe Specification Version (Identify): 1.4 00:10:41.934 Maximum Queue Entries: 2048 00:10:41.934 Contiguous Queues Required: Yes 00:10:41.934 Arbitration Mechanisms Supported 00:10:41.934 Weighted Round Robin: Not Supported 00:10:41.934 Vendor Specific: Not Supported 00:10:41.934 Reset Timeout: 7500 ms 00:10:41.934 Doorbell Stride: 4 bytes 00:10:41.934 NVM Subsystem Reset: Not Supported 00:10:41.934 Command Sets Supported 00:10:41.934 NVM Command Set: Supported 00:10:41.934 Boot Partition: Not Supported 00:10:41.934 Memory Page Size Minimum: 4096 bytes 00:10:41.934 Memory Page Size Maximum: 65536 bytes 00:10:41.934 Persistent Memory Region: Not Supported 00:10:41.934 Optional Asynchronous Events Supported 00:10:41.934 Namespace Attribute Notices: Supported 00:10:41.934 Firmware Activation Notices: Not Supported 
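Two derived fields in these dumps are easy to sanity-check. The temperature lines use an integer 273 offset from Kelvin, and the "(5GiB)" annotations follow from the LBA count times the current format's data size (4096 bytes for LBA Format #04 on 12341). Quick shell arithmetic against the values above:

  echo $(( 323 - 273 ))                # 50 -> "323 Kelvin (50 Celsius)"
  echo $(( 1310720 * 4096 ))           # 5368709120 bytes for the 12341 namespace
  echo $(( (1310720 * 4096) >> 30 ))   # exactly 5 GiB, matching "1310720 (5GiB)"

The 12340 namespace (1548666 LBAs at 4096 bytes) comes to about 5.9 GiB; the tool evidently floors to whole GiB, hence the same "(5GiB)" label there.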
00:10:41.934 ANA Change Notices: Not Supported 00:10:41.934 PLE Aggregate Log Change Notices: Not Supported 00:10:41.935 LBA Status Info Alert Notices: Not Supported 00:10:41.935 EGE Aggregate Log Change Notices: Not Supported 00:10:41.935 Normal NVM Subsystem Shutdown event: Not Supported 00:10:41.935 Zone Descriptor Change Notices: Not Supported 00:10:41.935 Discovery Log Change Notices: Not Supported 00:10:41.935 Controller Attributes 00:10:41.935 128-bit Host Identifier: Not Supported 00:10:41.935 Non-Operational Permissive Mode: Not Supported 00:10:41.935 NVM Sets: Not Supported 00:10:41.935 Read Recovery Levels: Not Supported 00:10:41.935 Endurance Groups: Supported 00:10:41.935 Predictable Latency Mode: Not Supported 00:10:41.935 Traffic Based Keep ALive: Not Supported 00:10:41.935 Namespace Granularity: Not Supported 00:10:41.935 SQ Associations: Not Supported 00:10:41.935 UUID List: Not Supported 00:10:41.935 Multi-Domain Subsystem: Not Supported 00:10:41.935 Fixed Capacity Management: Not Supported 00:10:41.935 Variable Capacity Management: Not Supported 00:10:41.935 Delete Endurance Group: Not Supported 00:10:41.935 Delete NVM Set: Not Supported 00:10:41.935 Extended LBA Formats Supported: Supported 00:10:41.935 Flexible Data Placement Supported: Supported 00:10:41.935 00:10:41.935 Controller Memory Buffer Support 00:10:41.935 ================================ 00:10:41.935 Supported: No 00:10:41.935 00:10:41.935 Persistent Memory Region Support 00:10:41.935 ================================ 00:10:41.935 Supported: No 00:10:41.935 00:10:41.935 Admin Command Set Attributes 00:10:41.935 ============================ 00:10:41.935 Security Send/Receive: Not Supported 00:10:41.935 Format NVM: Supported 00:10:41.935 Firmware Activate/Download: Not Supported 00:10:41.935 Namespace Management: Supported 00:10:41.935 Device Self-Test: Not Supported 00:10:41.935 Directives: Supported 00:10:41.935 NVMe-MI: Not Supported 00:10:41.935 Virtualization Management: Not Supported 00:10:41.935 Doorbell Buffer Config: Supported 00:10:41.935 Get LBA Status Capability: Not Supported 00:10:41.935 Command & Feature Lockdown Capability: Not Supported 00:10:41.935 Abort Command Limit: 4 00:10:41.935 Async Event Request Limit: 4 00:10:41.935 Number of Firmware Slots: N/A 00:10:41.935 Firmware Slot 1 Read-Only: N/A 00:10:41.935 Firmware Activation Without Reset: N/A 00:10:41.935 Multiple Update Detection Support: N/A 00:10:41.935 Firmware Update Granularity: No Information Provided 00:10:41.935 Per-Namespace SMART Log: Yes 00:10:41.935 Asymmetric Namespace Access Log Page: Not Supported 00:10:41.935 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:41.935 Command Effects Log Page: Supported 00:10:41.935 Get Log Page Extended Data: Supported 00:10:41.935 Telemetry Log Pages: Not Supported 00:10:41.935 Persistent Event Log Pages: Not Supported 00:10:41.935 Supported Log Pages Log Page: May Support 00:10:41.935 Commands Supported & Effects Log Page: Not Supported 00:10:41.935 Feature Identifiers & Effects Log Page:May Support 00:10:41.935 NVMe-MI Commands & Effects Log Page: May Support 00:10:41.935 Data Area 4 for Telemetry Log: Not Supported 00:10:41.935 Error Log Page Entries Supported: 1 00:10:41.935 Keep Alive: Not Supported 00:10:41.935 00:10:41.935 NVM Command Set Attributes 00:10:41.935 ========================== 00:10:41.935 Submission Queue Entry Size 00:10:41.935 Max: 64 00:10:41.935 Min: 64 00:10:41.935 Completion Queue Entry Size 00:10:41.935 Max: 16 00:10:41.935 Min: 16 00:10:41.935 Number of 
Namespaces: 256 00:10:41.935 Compare Command: Supported 00:10:41.935 Write Uncorrectable Command: Not Supported 00:10:41.935 Dataset Management Command: Supported 00:10:41.935 Write Zeroes Command: Supported 00:10:41.935 Set Features Save Field: Supported 00:10:41.935 Reservations: Not Supported 00:10:41.935 Timestamp: Supported 00:10:41.935 Copy: Supported 00:10:41.935 Volatile Write Cache: Present 00:10:41.935 Atomic Write Unit (Normal): 1 00:10:41.935 Atomic Write Unit (PFail): 1 00:10:41.935 Atomic Compare & Write Unit: 1 00:10:41.935 Fused Compare & Write: Not Supported 00:10:41.935 Scatter-Gather List 00:10:41.935 SGL Command Set: Supported 00:10:41.935 SGL Keyed: Not Supported 00:10:41.935 SGL Bit Bucket Descriptor: Not Supported 00:10:41.935 SGL Metadata Pointer: Not Supported 00:10:41.935 Oversized SGL: Not Supported 00:10:41.935 SGL Metadata Address: Not Supported 00:10:41.935 SGL Offset: Not Supported 00:10:41.935 Transport SGL Data Block: Not Supported 00:10:41.935 Replay Protected Memory Block: Not Supported 00:10:41.935 00:10:41.935 Firmware Slot Information 00:10:41.935 ========================= 00:10:41.935 Active slot: 1 00:10:41.935 Slot 1 Firmware Revision: 1.0 00:10:41.935 00:10:41.935 00:10:41.935 Commands Supported and Effects 00:10:41.935 ============================== 00:10:41.935 Admin Commands 00:10:41.935 -------------- 00:10:41.935 Delete I/O Submission Queue (00h): Supported 00:10:41.935 Create I/O Submission Queue (01h): Supported 00:10:41.935 Get Log Page (02h): Supported 00:10:41.935 Delete I/O Completion Queue (04h): Supported 00:10:41.935 Create I/O Completion Queue (05h): Supported 00:10:41.935 Identify (06h): Supported 00:10:41.935 Abort (08h): Supported 00:10:41.935 Set Features (09h): Supported 00:10:41.935 Get Features (0Ah): Supported 00:10:41.935 Asynchronous Event Request (0Ch): Supported 00:10:41.935 Namespace Attachment (15h): Supported NS-Inventory-Change [2024-04-24 20:17:11.983370] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 69673 terminated unexpected 00:10:41.935 Directive Send (19h): Supported 00:10:41.935 Directive Receive (1Ah): Supported 00:10:41.935 Virtualization Management (1Ch): Supported 00:10:41.935 Doorbell Buffer Config (7Ch): Supported 00:10:41.935 Format NVM (80h): Supported LBA-Change 00:10:41.935 I/O Commands 00:10:41.935 ------------ 00:10:41.935 Flush (00h): Supported LBA-Change 00:10:41.935 Write (01h): Supported LBA-Change 00:10:41.935 Read (02h): Supported 00:10:41.935 Compare (05h): Supported 00:10:41.935 Write Zeroes (08h): Supported LBA-Change 00:10:41.935 Dataset Management (09h): Supported LBA-Change 00:10:41.935 Unknown (0Ch): Supported 00:10:41.935 Unknown (12h): Supported 00:10:41.935 Copy (19h): Supported LBA-Change 00:10:41.935 Unknown (1Dh): Supported LBA-Change 00:10:41.935 00:10:41.935 Error Log 00:10:41.935 ========= 00:10:41.935 00:10:41.935 Arbitration 00:10:41.935 =========== 00:10:41.935 Arbitration Burst: no limit 00:10:41.935 00:10:41.935 Power Management 00:10:41.935 ================ 00:10:41.935 Number of Power States: 1 00:10:41.935 Current Power State: Power State #0 00:10:41.935 Power State #0: 00:10:41.935 Max Power: 25.00 W 00:10:41.935 Non-Operational State: Operational 00:10:41.935 Entry Latency: 16 microseconds 00:10:41.935 Exit Latency: 4 microseconds 00:10:41.935 Relative Read Throughput: 0 00:10:41.935 Relative Read Latency: 0 00:10:41.935 Relative Write Throughput: 0 00:10:41.935 Relative Write Latency: 0 00:10:41.935 Idle Power: Not 
Reported 00:10:41.935 Active Power: Not Reported 00:10:41.935 Non-Operational Permissive Mode: Not Supported 00:10:41.935 00:10:41.935 Health Information 00:10:41.935 ================== 00:10:41.935 Critical Warnings: 00:10:41.935 Available Spare Space: OK 00:10:41.935 Temperature: OK 00:10:41.935 Device Reliability: OK 00:10:41.935 Read Only: No 00:10:41.935 Volatile Memory Backup: OK 00:10:41.935 Current Temperature: 323 Kelvin (50 Celsius) 00:10:41.935 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:41.935 Available Spare: 0% 00:10:41.935 Available Spare Threshold: 0% 00:10:41.935 Life Percentage Used: 0% 00:10:41.935 Data Units Read: 921 00:10:41.935 Data Units Written: 814 00:10:41.935 Host Read Commands: 39566 00:10:41.935 Host Write Commands: 38156 00:10:41.935 Controller Busy Time: 0 minutes 00:10:41.935 Power Cycles: 0 00:10:41.935 Power On Hours: 0 hours 00:10:41.935 Unsafe Shutdowns: 0 00:10:41.935 Unrecoverable Media Errors: 0 00:10:41.935 Lifetime Error Log Entries: 0 00:10:41.935 Warning Temperature Time: 0 minutes 00:10:41.935 Critical Temperature Time: 0 minutes 00:10:41.935 00:10:41.935 Number of Queues 00:10:41.935 ================ 00:10:41.935 Number of I/O Submission Queues: 64 00:10:41.935 Number of I/O Completion Queues: 64 00:10:41.935 00:10:41.935 ZNS Specific Controller Data 00:10:41.935 ============================ 00:10:41.935 Zone Append Size Limit: 0 00:10:41.935 00:10:41.935 00:10:41.935 Active Namespaces 00:10:41.935 ================= 00:10:41.935 Namespace ID:1 00:10:41.935 Error Recovery Timeout: Unlimited 00:10:41.935 Command Set Identifier: NVM (00h) 00:10:41.935 Deallocate: Supported 00:10:41.935 Deallocated/Unwritten Error: Supported 00:10:41.935 Deallocated Read Value: All 0x00 00:10:41.935 Deallocate in Write Zeroes: Not Supported 00:10:41.935 Deallocated Guard Field: 0xFFFF 00:10:41.935 Flush: Supported 00:10:41.936 Reservation: Not Supported 00:10:41.936 Namespace Sharing Capabilities: Multiple Controllers 00:10:41.936 Size (in LBAs): 262144 (1GiB) 00:10:41.936 Capacity (in LBAs): 262144 (1GiB) 00:10:41.936 Utilization (in LBAs): 262144 (1GiB) 00:10:41.936 Thin Provisioning: Not Supported 00:10:41.936 Per-NS Atomic Units: No 00:10:41.936 Maximum Single Source Range Length: 128 00:10:41.936 Maximum Copy Length: 128 00:10:41.936 Maximum Source Range Count: 128 00:10:41.936 NGUID/EUI64 Never Reused: No 00:10:41.936 Namespace Write Protected: No 00:10:41.936 Endurance group ID: 1 00:10:41.936 Number of LBA Formats: 8 00:10:41.936 Current LBA Format: LBA Format #04 00:10:41.936 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:41.936 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:41.936 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:41.936 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:41.936 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:41.936 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:41.936 LBA Format #06: Data Size: 4096 Metadata Size: 16 [2024-04-24 20:17:11.985837] nvme_ctrlr.c:3484:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 69673 terminated unexpected 00:10:41.936 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:41.936 00:10:41.936 Get Feature FDP: 00:10:41.936 ================ 00:10:41.936 Enabled: Yes 00:10:41.936 FDP configuration index: 0 00:10:41.936 00:10:41.936 FDP configurations log page 00:10:41.936 =========================== 00:10:41.936 Number of FDP configurations: 1 00:10:41.936 Version: 0 00:10:41.936 Size: 112 00:10:41.936 FDP 
Configuration Descriptor: 0 00:10:41.936 Descriptor Size: 96 00:10:41.936 Reclaim Group Identifier format: 2 00:10:41.936 FDP Volatile Write Cache: Not Present 00:10:41.936 FDP Configuration: Valid 00:10:41.936 Vendor Specific Size: 0 00:10:41.936 Number of Reclaim Groups: 2 00:10:41.936 Number of Reclaim Unit Handles: 8 00:10:41.936 Max Placement Identifiers: 128 00:10:41.936 Number of Namespaces Supported: 256 00:10:41.936 Reclaim Unit Nominal Size: 6000000 bytes 00:10:41.936 Estimated Reclaim Unit Time Limit: Not Reported 00:10:41.936 RUH Desc #000: RUH Type: Initially Isolated 00:10:41.936 RUH Desc #001: RUH Type: Initially Isolated 00:10:41.936 RUH Desc #002: RUH Type: Initially Isolated 00:10:41.936 RUH Desc #003: RUH Type: Initially Isolated 00:10:41.936 RUH Desc #004: RUH Type: Initially Isolated 00:10:41.936 RUH Desc #005: RUH Type: Initially Isolated 00:10:41.936 RUH Desc #006: RUH Type: Initially Isolated 00:10:41.936 RUH Desc #007: RUH Type: Initially Isolated 00:10:41.936 00:10:41.936 FDP reclaim unit handle usage log page 00:10:41.936 ====================================== 00:10:41.936 Number of Reclaim Unit Handles: 8 00:10:41.936 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:41.936 RUH Usage Desc #001: RUH Attributes: Unused 00:10:41.936 RUH Usage Desc #002: RUH Attributes: Unused 00:10:41.936 RUH Usage Desc #003: RUH Attributes: Unused 00:10:41.936 RUH Usage Desc #004: RUH Attributes: Unused 00:10:41.936 RUH Usage Desc #005: RUH Attributes: Unused 00:10:41.936 RUH Usage Desc #006: RUH Attributes: Unused 00:10:41.936 RUH Usage Desc #007: RUH Attributes: Unused 00:10:41.936 00:10:41.936 FDP statistics log page 00:10:41.936 ======================= 00:10:41.936 Host bytes with metadata written: 516005888 00:10:41.936 Media bytes with metadata written: 516063232 00:10:41.936 Media bytes erased: 0 00:10:41.936 00:10:41.936 FDP events log page 00:10:41.936 =================== 00:10:41.936 Number of FDP events: 0 00:10:41.936 00:10:41.936 ===================================================== 00:10:41.936 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:41.936 ===================================================== 00:10:41.936 Controller Capabilities/Features 00:10:41.936 ================================ 00:10:41.936 Vendor ID: 1b36 00:10:41.936 Subsystem Vendor ID: 1af4 00:10:41.936 Serial Number: 12342 00:10:41.936 Model Number: QEMU NVMe Ctrl 00:10:41.936 Firmware Version: 8.0.0 00:10:41.936 Recommended Arb Burst: 6 00:10:41.936 IEEE OUI Identifier: 00 54 52 00:10:41.936 Multi-path I/O 00:10:41.936 May have multiple subsystem ports: No 00:10:41.936 May have multiple controllers: No 00:10:41.936 Associated with SR-IOV VF: No 00:10:41.936 Max Data Transfer Size: 524288 00:10:41.936 Max Number of Namespaces: 256 00:10:41.936 Max Number of I/O Queues: 64 00:10:41.936 NVMe Specification Version (VS): 1.4 00:10:41.936 NVMe Specification Version (Identify): 1.4 00:10:41.936 Maximum Queue Entries: 2048 00:10:41.936 Contiguous Queues Required: Yes 00:10:41.936 Arbitration Mechanisms Supported 00:10:41.936 Weighted Round Robin: Not Supported 00:10:41.936 Vendor Specific: Not Supported 00:10:41.936 Reset Timeout: 7500 ms 00:10:41.936 Doorbell Stride: 4 bytes 00:10:41.936 NVM Subsystem Reset: Not Supported 00:10:41.936 Command Sets Supported 00:10:41.936 NVM Command Set: Supported 00:10:41.936 Boot Partition: Not Supported 00:10:41.936 Memory Page Size Minimum: 4096 bytes 00:10:41.936 Memory Page Size Maximum: 65536 bytes 00:10:41.936 Persistent Memory Region: Not 
Supported 00:10:41.936 Optional Asynchronous Events Supported 00:10:41.936 Namespace Attribute Notices: Supported 00:10:41.936 Firmware Activation Notices: Not Supported 00:10:41.936 ANA Change Notices: Not Supported 00:10:41.936 PLE Aggregate Log Change Notices: Not Supported 00:10:41.936 LBA Status Info Alert Notices: Not Supported 00:10:41.936 EGE Aggregate Log Change Notices: Not Supported 00:10:41.936 Normal NVM Subsystem Shutdown event: Not Supported 00:10:41.936 Zone Descriptor Change Notices: Not Supported 00:10:41.936 Discovery Log Change Notices: Not Supported 00:10:41.936 Controller Attributes 00:10:41.936 128-bit Host Identifier: Not Supported 00:10:41.936 Non-Operational Permissive Mode: Not Supported 00:10:41.936 NVM Sets: Not Supported 00:10:41.936 Read Recovery Levels: Not Supported 00:10:41.936 Endurance Groups: Not Supported 00:10:41.936 Predictable Latency Mode: Not Supported 00:10:41.936 Traffic Based Keep ALive: Not Supported 00:10:41.936 Namespace Granularity: Not Supported 00:10:41.936 SQ Associations: Not Supported 00:10:41.936 UUID List: Not Supported 00:10:41.936 Multi-Domain Subsystem: Not Supported 00:10:41.936 Fixed Capacity Management: Not Supported 00:10:41.936 Variable Capacity Management: Not Supported 00:10:41.936 Delete Endurance Group: Not Supported 00:10:41.936 Delete NVM Set: Not Supported 00:10:41.936 Extended LBA Formats Supported: Supported 00:10:41.936 Flexible Data Placement Supported: Not Supported 00:10:41.936 00:10:41.936 Controller Memory Buffer Support 00:10:41.936 ================================ 00:10:41.936 Supported: No 00:10:41.936 00:10:41.936 Persistent Memory Region Support 00:10:41.936 ================================ 00:10:41.936 Supported: No 00:10:41.936 00:10:41.936 Admin Command Set Attributes 00:10:41.936 ============================ 00:10:41.936 Security Send/Receive: Not Supported 00:10:41.936 Format NVM: Supported 00:10:41.936 Firmware Activate/Download: Not Supported 00:10:41.936 Namespace Management: Supported 00:10:41.936 Device Self-Test: Not Supported 00:10:41.936 Directives: Supported 00:10:41.936 NVMe-MI: Not Supported 00:10:41.936 Virtualization Management: Not Supported 00:10:41.936 Doorbell Buffer Config: Supported 00:10:41.936 Get LBA Status Capability: Not Supported 00:10:41.936 Command & Feature Lockdown Capability: Not Supported 00:10:41.936 Abort Command Limit: 4 00:10:41.936 Async Event Request Limit: 4 00:10:41.936 Number of Firmware Slots: N/A 00:10:41.936 Firmware Slot 1 Read-Only: N/A 00:10:41.937 Firmware Activation Without Reset: N/A 00:10:41.937 Multiple Update Detection Support: N/A 00:10:41.937 Firmware Update Granularity: No Information Provided 00:10:41.937 Per-Namespace SMART Log: Yes 00:10:41.937 Asymmetric Namespace Access Log Page: Not Supported 00:10:41.937 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:41.937 Command Effects Log Page: Supported 00:10:41.937 Get Log Page Extended Data: Supported 00:10:41.937 Telemetry Log Pages: Not Supported 00:10:41.937 Persistent Event Log Pages: Not Supported 00:10:41.937 Supported Log Pages Log Page: May Support 00:10:41.937 Commands Supported & Effects Log Page: Not Supported 00:10:41.937 Feature Identifiers & Effects Log Page:May Support 00:10:41.937 NVMe-MI Commands & Effects Log Page: May Support 00:10:41.937 Data Area 4 for Telemetry Log: Not Supported 00:10:41.937 Error Log Page Entries Supported: 1 00:10:41.937 Keep Alive: Not Supported 00:10:41.937 00:10:41.937 NVM Command Set Attributes 00:10:41.937 ========================== 00:10:41.937 
Submission Queue Entry Size 00:10:41.937 Max: 64 00:10:41.937 Min: 64 00:10:41.937 Completion Queue Entry Size 00:10:41.937 Max: 16 00:10:41.937 Min: 16 00:10:41.937 Number of Namespaces: 256 00:10:41.937 Compare Command: Supported 00:10:41.937 Write Uncorrectable Command: Not Supported 00:10:41.937 Dataset Management Command: Supported 00:10:41.937 Write Zeroes Command: Supported 00:10:41.937 Set Features Save Field: Supported 00:10:41.937 Reservations: Not Supported 00:10:41.937 Timestamp: Supported 00:10:41.937 Copy: Supported 00:10:41.937 Volatile Write Cache: Present 00:10:41.937 Atomic Write Unit (Normal): 1 00:10:41.937 Atomic Write Unit (PFail): 1 00:10:41.937 Atomic Compare & Write Unit: 1 00:10:41.937 Fused Compare & Write: Not Supported 00:10:41.937 Scatter-Gather List 00:10:41.937 SGL Command Set: Supported 00:10:41.937 SGL Keyed: Not Supported 00:10:41.937 SGL Bit Bucket Descriptor: Not Supported 00:10:41.937 SGL Metadata Pointer: Not Supported 00:10:41.937 Oversized SGL: Not Supported 00:10:41.937 SGL Metadata Address: Not Supported 00:10:41.937 SGL Offset: Not Supported 00:10:41.937 Transport SGL Data Block: Not Supported 00:10:41.937 Replay Protected Memory Block: Not Supported 00:10:41.937 00:10:41.937 Firmware Slot Information 00:10:41.937 ========================= 00:10:41.937 Active slot: 1 00:10:41.937 Slot 1 Firmware Revision: 1.0 00:10:41.937 00:10:41.937 00:10:41.937 Commands Supported and Effects 00:10:41.937 ============================== 00:10:41.937 Admin Commands 00:10:41.937 -------------- 00:10:41.937 Delete I/O Submission Queue (00h): Supported 00:10:41.937 Create I/O Submission Queue (01h): Supported 00:10:41.937 Get Log Page (02h): Supported 00:10:41.937 Delete I/O Completion Queue (04h): Supported 00:10:41.937 Create I/O Completion Queue (05h): Supported 00:10:41.937 Identify (06h): Supported 00:10:41.937 Abort (08h): Supported 00:10:41.937 Set Features (09h): Supported 00:10:41.937 Get Features (0Ah): Supported 00:10:41.937 Asynchronous Event Request (0Ch): Supported 00:10:41.937 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:41.937 Directive Send (19h): Supported 00:10:41.937 Directive Receive (1Ah): Supported 00:10:41.937 Virtualization Management (1Ch): Supported 00:10:41.937 Doorbell Buffer Config (7Ch): Supported 00:10:41.937 Format NVM (80h): Supported LBA-Change 00:10:41.937 I/O Commands 00:10:41.937 ------------ 00:10:41.937 Flush (00h): Supported LBA-Change 00:10:41.937 Write (01h): Supported LBA-Change 00:10:41.937 Read (02h): Supported 00:10:41.937 Compare (05h): Supported 00:10:41.937 Write Zeroes (08h): Supported LBA-Change 00:10:41.937 Dataset Management (09h): Supported LBA-Change 00:10:41.937 Unknown (0Ch): Supported 00:10:41.937 Unknown (12h): Supported 00:10:41.937 Copy (19h): Supported LBA-Change 00:10:41.937 Unknown (1Dh): Supported LBA-Change 00:10:41.937 00:10:41.937 Error Log 00:10:41.937 ========= 00:10:41.937 00:10:41.937 Arbitration 00:10:41.937 =========== 00:10:41.937 Arbitration Burst: no limit 00:10:41.937 00:10:41.937 Power Management 00:10:41.937 ================ 00:10:41.937 Number of Power States: 1 00:10:41.937 Current Power State: Power State #0 00:10:41.937 Power State #0: 00:10:41.937 Max Power: 25.00 W 00:10:41.937 Non-Operational State: Operational 00:10:41.937 Entry Latency: 16 microseconds 00:10:41.937 Exit Latency: 4 microseconds 00:10:41.937 Relative Read Throughput: 0 00:10:41.937 Relative Read Latency: 0 00:10:41.937 Relative Write Throughput: 0 00:10:41.937 Relative Write Latency: 0 
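The FDP pages dumped earlier for 12343 can be cross-checked the same way: 2 reclaim groups times 8 reclaim unit handles allows up to 16 reclaim units of the nominal 6000000-byte size to be open at once, and the statistics page shows media writes running slightly ahead of host writes. Illustrative arithmetic, not additional tool output:

  echo $(( 2 * 8 * 6000000 ))         # 96000000 bytes across all open reclaim units
  echo $(( 516063232 - 516005888 ))   # 57344 bytes of media-side write overhead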
00:10:41.937 Idle Power: Not Reported 00:10:41.937 Active Power: Not Reported 00:10:41.937 Non-Operational Permissive Mode: Not Supported 00:10:41.937 00:10:41.937 Health Information 00:10:41.937 ================== 00:10:41.937 Critical Warnings: 00:10:41.937 Available Spare Space: OK 00:10:41.937 Temperature: OK 00:10:41.937 Device Reliability: OK 00:10:41.937 Read Only: No 00:10:41.937 Volatile Memory Backup: OK 00:10:41.937 Current Temperature: 323 Kelvin (50 Celsius) 00:10:41.937 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:41.937 Available Spare: 0% 00:10:41.937 Available Spare Threshold: 0% 00:10:41.937 Life Percentage Used: 0% 00:10:41.937 Data Units Read: 2551 00:10:41.937 Data Units Written: 2231 00:10:41.937 Host Read Commands: 116813 00:10:41.937 Host Write Commands: 112583 00:10:41.937 Controller Busy Time: 0 minutes 00:10:41.937 Power Cycles: 0 00:10:41.937 Power On Hours: 0 hours 00:10:41.937 Unsafe Shutdowns: 0 00:10:41.937 Unrecoverable Media Errors: 0 00:10:41.937 Lifetime Error Log Entries: 0 00:10:41.937 Warning Temperature Time: 0 minutes 00:10:41.937 Critical Temperature Time: 0 minutes 00:10:41.937 00:10:41.937 Number of Queues 00:10:41.937 ================ 00:10:41.937 Number of I/O Submission Queues: 64 00:10:41.937 Number of I/O Completion Queues: 64 00:10:41.937 00:10:41.937 ZNS Specific Controller Data 00:10:41.937 ============================ 00:10:41.937 Zone Append Size Limit: 0 00:10:41.937 00:10:41.937 00:10:41.937 Active Namespaces 00:10:41.937 ================= 00:10:41.937 Namespace ID:1 00:10:41.937 Error Recovery Timeout: Unlimited 00:10:41.937 Command Set Identifier: NVM (00h) 00:10:41.937 Deallocate: Supported 00:10:41.937 Deallocated/Unwritten Error: Supported 00:10:41.937 Deallocated Read Value: All 0x00 00:10:41.937 Deallocate in Write Zeroes: Not Supported 00:10:41.937 Deallocated Guard Field: 0xFFFF 00:10:41.937 Flush: Supported 00:10:41.937 Reservation: Not Supported 00:10:41.937 Namespace Sharing Capabilities: Private 00:10:41.937 Size (in LBAs): 1048576 (4GiB) 00:10:41.937 Capacity (in LBAs): 1048576 (4GiB) 00:10:41.937 Utilization (in LBAs): 1048576 (4GiB) 00:10:41.937 Thin Provisioning: Not Supported 00:10:41.937 Per-NS Atomic Units: No 00:10:41.937 Maximum Single Source Range Length: 128 00:10:41.937 Maximum Copy Length: 128 00:10:41.937 Maximum Source Range Count: 128 00:10:41.937 NGUID/EUI64 Never Reused: No 00:10:41.937 Namespace Write Protected: No 00:10:41.937 Number of LBA Formats: 8 00:10:41.937 Current LBA Format: LBA Format #04 00:10:41.937 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:41.937 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:41.937 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:41.937 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:41.937 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:41.937 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:41.937 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:41.937 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:41.937 00:10:41.937 Namespace ID:2 00:10:41.937 Error Recovery Timeout: Unlimited 00:10:41.937 Command Set Identifier: NVM (00h) 00:10:41.937 Deallocate: Supported 00:10:41.937 Deallocated/Unwritten Error: Supported 00:10:41.937 Deallocated Read Value: All 0x00 00:10:41.937 Deallocate in Write Zeroes: Not Supported 00:10:41.937 Deallocated Guard Field: 0xFFFF 00:10:41.937 Flush: Supported 00:10:41.937 Reservation: Not Supported 00:10:41.937 Namespace Sharing Capabilities: Private 00:10:41.937 Size (in LBAs): 
1048576 (4GiB) 00:10:41.937 Capacity (in LBAs): 1048576 (4GiB) 00:10:41.937 Utilization (in LBAs): 1048576 (4GiB) 00:10:41.937 Thin Provisioning: Not Supported 00:10:41.937 Per-NS Atomic Units: No 00:10:41.937 Maximum Single Source Range Length: 128 00:10:41.937 Maximum Copy Length: 128 00:10:41.937 Maximum Source Range Count: 128 00:10:41.938 NGUID/EUI64 Never Reused: No 00:10:41.938 Namespace Write Protected: No 00:10:41.938 Number of LBA Formats: 8 00:10:41.938 Current LBA Format: LBA Format #04 00:10:41.938 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:41.938 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:41.938 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:41.938 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:41.938 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:41.938 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:41.938 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:41.938 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:41.938 00:10:41.938 Namespace ID:3 00:10:41.938 Error Recovery Timeout: Unlimited 00:10:41.938 Command Set Identifier: NVM (00h) 00:10:41.938 Deallocate: Supported 00:10:41.938 Deallocated/Unwritten Error: Supported 00:10:41.938 Deallocated Read Value: All 0x00 00:10:41.938 Deallocate in Write Zeroes: Not Supported 00:10:41.938 Deallocated Guard Field: 0xFFFF 00:10:41.938 Flush: Supported 00:10:41.938 Reservation: Not Supported 00:10:41.938 Namespace Sharing Capabilities: Private 00:10:41.938 Size (in LBAs): 1048576 (4GiB) 00:10:41.938 Capacity (in LBAs): 1048576 (4GiB) 00:10:41.938 Utilization (in LBAs): 1048576 (4GiB) 00:10:41.938 Thin Provisioning: Not Supported 00:10:41.938 Per-NS Atomic Units: No 00:10:41.938 Maximum Single Source Range Length: 128 00:10:41.938 Maximum Copy Length: 128 00:10:41.938 Maximum Source Range Count: 128 00:10:41.938 NGUID/EUI64 Never Reused: No 00:10:41.938 Namespace Write Protected: No 00:10:41.938 Number of LBA Formats: 8 00:10:41.938 Current LBA Format: LBA Format #04 00:10:41.938 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:41.938 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:41.938 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:41.938 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:41.938 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:41.938 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:41.938 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:41.938 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:41.938 00:10:41.938 20:17:12 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:41.938 20:17:12 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:10:42.198 ===================================================== 00:10:42.198 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:42.198 ===================================================== 00:10:42.198 Controller Capabilities/Features 00:10:42.198 ================================ 00:10:42.198 Vendor ID: 1b36 00:10:42.198 Subsystem Vendor ID: 1af4 00:10:42.198 Serial Number: 12340 00:10:42.198 Model Number: QEMU NVMe Ctrl 00:10:42.198 Firmware Version: 8.0.0 00:10:42.198 Recommended Arb Burst: 6 00:10:42.198 IEEE OUI Identifier: 00 54 52 00:10:42.198 Multi-path I/O 00:10:42.198 May have multiple subsystem ports: No 00:10:42.198 May have multiple controllers: No 00:10:42.198 Associated with SR-IOV VF: No 00:10:42.198 Max Data Transfer Size: 524288 00:10:42.198 Max Number of Namespaces: 256 
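The single-controller reports from here on come from the loop traced as nvme.sh@15 and @16 above: the same identify binary, now scoped to one PCIe transport ID per pass. The loop reduces to (a sketch matching the traced invocation):

  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:$bdf" -i 0
  done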
00:10:42.198 Max Number of I/O Queues: 64 00:10:42.198 NVMe Specification Version (VS): 1.4 00:10:42.198 NVMe Specification Version (Identify): 1.4 00:10:42.198 Maximum Queue Entries: 2048 00:10:42.198 Contiguous Queues Required: Yes 00:10:42.198 Arbitration Mechanisms Supported 00:10:42.198 Weighted Round Robin: Not Supported 00:10:42.198 Vendor Specific: Not Supported 00:10:42.198 Reset Timeout: 7500 ms 00:10:42.198 Doorbell Stride: 4 bytes 00:10:42.198 NVM Subsystem Reset: Not Supported 00:10:42.198 Command Sets Supported 00:10:42.198 NVM Command Set: Supported 00:10:42.198 Boot Partition: Not Supported 00:10:42.198 Memory Page Size Minimum: 4096 bytes 00:10:42.198 Memory Page Size Maximum: 65536 bytes 00:10:42.198 Persistent Memory Region: Not Supported 00:10:42.198 Optional Asynchronous Events Supported 00:10:42.198 Namespace Attribute Notices: Supported 00:10:42.198 Firmware Activation Notices: Not Supported 00:10:42.198 ANA Change Notices: Not Supported 00:10:42.198 PLE Aggregate Log Change Notices: Not Supported 00:10:42.198 LBA Status Info Alert Notices: Not Supported 00:10:42.198 EGE Aggregate Log Change Notices: Not Supported 00:10:42.198 Normal NVM Subsystem Shutdown event: Not Supported 00:10:42.198 Zone Descriptor Change Notices: Not Supported 00:10:42.198 Discovery Log Change Notices: Not Supported 00:10:42.198 Controller Attributes 00:10:42.198 128-bit Host Identifier: Not Supported 00:10:42.198 Non-Operational Permissive Mode: Not Supported 00:10:42.198 NVM Sets: Not Supported 00:10:42.198 Read Recovery Levels: Not Supported 00:10:42.198 Endurance Groups: Not Supported 00:10:42.198 Predictable Latency Mode: Not Supported 00:10:42.198 Traffic Based Keep ALive: Not Supported 00:10:42.198 Namespace Granularity: Not Supported 00:10:42.198 SQ Associations: Not Supported 00:10:42.198 UUID List: Not Supported 00:10:42.198 Multi-Domain Subsystem: Not Supported 00:10:42.198 Fixed Capacity Management: Not Supported 00:10:42.198 Variable Capacity Management: Not Supported 00:10:42.198 Delete Endurance Group: Not Supported 00:10:42.198 Delete NVM Set: Not Supported 00:10:42.198 Extended LBA Formats Supported: Supported 00:10:42.198 Flexible Data Placement Supported: Not Supported 00:10:42.198 00:10:42.198 Controller Memory Buffer Support 00:10:42.198 ================================ 00:10:42.198 Supported: No 00:10:42.198 00:10:42.198 Persistent Memory Region Support 00:10:42.198 ================================ 00:10:42.198 Supported: No 00:10:42.198 00:10:42.198 Admin Command Set Attributes 00:10:42.198 ============================ 00:10:42.198 Security Send/Receive: Not Supported 00:10:42.198 Format NVM: Supported 00:10:42.198 Firmware Activate/Download: Not Supported 00:10:42.198 Namespace Management: Supported 00:10:42.198 Device Self-Test: Not Supported 00:10:42.198 Directives: Supported 00:10:42.198 NVMe-MI: Not Supported 00:10:42.198 Virtualization Management: Not Supported 00:10:42.198 Doorbell Buffer Config: Supported 00:10:42.198 Get LBA Status Capability: Not Supported 00:10:42.198 Command & Feature Lockdown Capability: Not Supported 00:10:42.198 Abort Command Limit: 4 00:10:42.198 Async Event Request Limit: 4 00:10:42.198 Number of Firmware Slots: N/A 00:10:42.198 Firmware Slot 1 Read-Only: N/A 00:10:42.198 Firmware Activation Without Reset: N/A 00:10:42.198 Multiple Update Detection Support: N/A 00:10:42.198 Firmware Update Granularity: No Information Provided 00:10:42.198 Per-Namespace SMART Log: Yes 00:10:42.198 Asymmetric Namespace Access Log Page: Not Supported 
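The "Max Data Transfer Size: 524288" line just above is itself a derived value: NVMe's Identify Controller reports MDTS as a power-of-two multiple of the minimum memory page size, 4096 bytes here, so an MDTS field of 7 produces the printed limit:

  echo $(( (1 << 7) * 4096 ))   # 524288 bytes per data transfer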
00:10:42.198 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:10:42.198 Command Effects Log Page: Supported 00:10:42.198 Get Log Page Extended Data: Supported 00:10:42.198 Telemetry Log Pages: Not Supported 00:10:42.198 Persistent Event Log Pages: Not Supported 00:10:42.198 Supported Log Pages Log Page: May Support 00:10:42.198 Commands Supported & Effects Log Page: Not Supported 00:10:42.198 Feature Identifiers & Effects Log Page:May Support 00:10:42.198 NVMe-MI Commands & Effects Log Page: May Support 00:10:42.198 Data Area 4 for Telemetry Log: Not Supported 00:10:42.198 Error Log Page Entries Supported: 1 00:10:42.198 Keep Alive: Not Supported 00:10:42.198 00:10:42.198 NVM Command Set Attributes 00:10:42.198 ========================== 00:10:42.198 Submission Queue Entry Size 00:10:42.198 Max: 64 00:10:42.198 Min: 64 00:10:42.198 Completion Queue Entry Size 00:10:42.198 Max: 16 00:10:42.198 Min: 16 00:10:42.198 Number of Namespaces: 256 00:10:42.198 Compare Command: Supported 00:10:42.198 Write Uncorrectable Command: Not Supported 00:10:42.198 Dataset Management Command: Supported 00:10:42.198 Write Zeroes Command: Supported 00:10:42.198 Set Features Save Field: Supported 00:10:42.198 Reservations: Not Supported 00:10:42.198 Timestamp: Supported 00:10:42.198 Copy: Supported 00:10:42.198 Volatile Write Cache: Present 00:10:42.198 Atomic Write Unit (Normal): 1 00:10:42.198 Atomic Write Unit (PFail): 1 00:10:42.198 Atomic Compare & Write Unit: 1 00:10:42.198 Fused Compare & Write: Not Supported 00:10:42.198 Scatter-Gather List 00:10:42.198 SGL Command Set: Supported 00:10:42.198 SGL Keyed: Not Supported 00:10:42.198 SGL Bit Bucket Descriptor: Not Supported 00:10:42.198 SGL Metadata Pointer: Not Supported 00:10:42.198 Oversized SGL: Not Supported 00:10:42.198 SGL Metadata Address: Not Supported 00:10:42.198 SGL Offset: Not Supported 00:10:42.198 Transport SGL Data Block: Not Supported 00:10:42.198 Replay Protected Memory Block: Not Supported 00:10:42.198 00:10:42.198 Firmware Slot Information 00:10:42.198 ========================= 00:10:42.198 Active slot: 1 00:10:42.198 Slot 1 Firmware Revision: 1.0 00:10:42.198 00:10:42.198 00:10:42.198 Commands Supported and Effects 00:10:42.198 ============================== 00:10:42.198 Admin Commands 00:10:42.198 -------------- 00:10:42.198 Delete I/O Submission Queue (00h): Supported 00:10:42.198 Create I/O Submission Queue (01h): Supported 00:10:42.198 Get Log Page (02h): Supported 00:10:42.198 Delete I/O Completion Queue (04h): Supported 00:10:42.198 Create I/O Completion Queue (05h): Supported 00:10:42.198 Identify (06h): Supported 00:10:42.198 Abort (08h): Supported 00:10:42.198 Set Features (09h): Supported 00:10:42.198 Get Features (0Ah): Supported 00:10:42.198 Asynchronous Event Request (0Ch): Supported 00:10:42.198 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:42.198 Directive Send (19h): Supported 00:10:42.198 Directive Receive (1Ah): Supported 00:10:42.198 Virtualization Management (1Ch): Supported 00:10:42.198 Doorbell Buffer Config (7Ch): Supported 00:10:42.198 Format NVM (80h): Supported LBA-Change 00:10:42.198 I/O Commands 00:10:42.198 ------------ 00:10:42.198 Flush (00h): Supported LBA-Change 00:10:42.198 Write (01h): Supported LBA-Change 00:10:42.198 Read (02h): Supported 00:10:42.198 Compare (05h): Supported 00:10:42.198 Write Zeroes (08h): Supported LBA-Change 00:10:42.198 Dataset Management (09h): Supported LBA-Change 00:10:42.198 Unknown (0Ch): Supported 00:10:42.198 Unknown (12h): Supported 00:10:42.198 Copy (19h): 
Supported LBA-Change 00:10:42.198 Unknown (1Dh): Supported LBA-Change 00:10:42.198 00:10:42.198 Error Log 00:10:42.198 ========= 00:10:42.198 00:10:42.198 Arbitration 00:10:42.198 =========== 00:10:42.198 Arbitration Burst: no limit 00:10:42.198 00:10:42.198 Power Management 00:10:42.198 ================ 00:10:42.198 Number of Power States: 1 00:10:42.198 Current Power State: Power State #0 00:10:42.198 Power State #0: 00:10:42.199 Max Power: 25.00 W 00:10:42.199 Non-Operational State: Operational 00:10:42.199 Entry Latency: 16 microseconds 00:10:42.199 Exit Latency: 4 microseconds 00:10:42.199 Relative Read Throughput: 0 00:10:42.199 Relative Read Latency: 0 00:10:42.199 Relative Write Throughput: 0 00:10:42.199 Relative Write Latency: 0 00:10:42.199 Idle Power: Not Reported 00:10:42.199 Active Power: Not Reported 00:10:42.199 Non-Operational Permissive Mode: Not Supported 00:10:42.199 00:10:42.199 Health Information 00:10:42.199 ================== 00:10:42.199 Critical Warnings: 00:10:42.199 Available Spare Space: OK 00:10:42.199 Temperature: OK 00:10:42.199 Device Reliability: OK 00:10:42.199 Read Only: No 00:10:42.199 Volatile Memory Backup: OK 00:10:42.199 Current Temperature: 323 Kelvin (50 Celsius) 00:10:42.199 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:42.199 Available Spare: 0% 00:10:42.199 Available Spare Threshold: 0% 00:10:42.199 Life Percentage Used: 0% 00:10:42.199 Data Units Read: 1173 00:10:42.199 Data Units Written: 1005 00:10:42.199 Host Read Commands: 54933 00:10:42.199 Host Write Commands: 53401 00:10:42.199 Controller Busy Time: 0 minutes 00:10:42.199 Power Cycles: 0 00:10:42.199 Power On Hours: 0 hours 00:10:42.199 Unsafe Shutdowns: 0 00:10:42.199 Unrecoverable Media Errors: 0 00:10:42.199 Lifetime Error Log Entries: 0 00:10:42.199 Warning Temperature Time: 0 minutes 00:10:42.199 Critical Temperature Time: 0 minutes 00:10:42.199 00:10:42.199 Number of Queues 00:10:42.199 ================ 00:10:42.199 Number of I/O Submission Queues: 64 00:10:42.199 Number of I/O Completion Queues: 64 00:10:42.199 00:10:42.199 ZNS Specific Controller Data 00:10:42.199 ============================ 00:10:42.199 Zone Append Size Limit: 0 00:10:42.199 00:10:42.199 00:10:42.199 Active Namespaces 00:10:42.199 ================= 00:10:42.199 Namespace ID:1 00:10:42.199 Error Recovery Timeout: Unlimited 00:10:42.199 Command Set Identifier: NVM (00h) 00:10:42.199 Deallocate: Supported 00:10:42.199 Deallocated/Unwritten Error: Supported 00:10:42.199 Deallocated Read Value: All 0x00 00:10:42.199 Deallocate in Write Zeroes: Not Supported 00:10:42.199 Deallocated Guard Field: 0xFFFF 00:10:42.199 Flush: Supported 00:10:42.199 Reservation: Not Supported 00:10:42.199 Metadata Transferred as: Separate Metadata Buffer 00:10:42.199 Namespace Sharing Capabilities: Private 00:10:42.199 Size (in LBAs): 1548666 (5GiB) 00:10:42.199 Capacity (in LBAs): 1548666 (5GiB) 00:10:42.199 Utilization (in LBAs): 1548666 (5GiB) 00:10:42.199 Thin Provisioning: Not Supported 00:10:42.199 Per-NS Atomic Units: No 00:10:42.199 Maximum Single Source Range Length: 128 00:10:42.199 Maximum Copy Length: 128 00:10:42.199 Maximum Source Range Count: 128 00:10:42.199 NGUID/EUI64 Never Reused: No 00:10:42.199 Namespace Write Protected: No 00:10:42.199 Number of LBA Formats: 8 00:10:42.199 Current LBA Format: LBA Format #07 00:10:42.199 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:42.199 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:42.199 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:42.199 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:10:42.199 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:42.199 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:42.199 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:42.199 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:42.199 00:10:42.199 20:17:12 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:42.199 20:17:12 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:10:42.458 ===================================================== 00:10:42.458 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:42.458 ===================================================== 00:10:42.458 Controller Capabilities/Features 00:10:42.458 ================================ 00:10:42.458 Vendor ID: 1b36 00:10:42.458 Subsystem Vendor ID: 1af4 00:10:42.458 Serial Number: 12341 00:10:42.458 Model Number: QEMU NVMe Ctrl 00:10:42.458 Firmware Version: 8.0.0 00:10:42.458 Recommended Arb Burst: 6 00:10:42.458 IEEE OUI Identifier: 00 54 52 00:10:42.458 Multi-path I/O 00:10:42.458 May have multiple subsystem ports: No 00:10:42.458 May have multiple controllers: No 00:10:42.458 Associated with SR-IOV VF: No 00:10:42.458 Max Data Transfer Size: 524288 00:10:42.458 Max Number of Namespaces: 256 00:10:42.458 Max Number of I/O Queues: 64 00:10:42.458 NVMe Specification Version (VS): 1.4 00:10:42.458 NVMe Specification Version (Identify): 1.4 00:10:42.458 Maximum Queue Entries: 2048 00:10:42.458 Contiguous Queues Required: Yes 00:10:42.458 Arbitration Mechanisms Supported 00:10:42.458 Weighted Round Robin: Not Supported 00:10:42.458 Vendor Specific: Not Supported 00:10:42.458 Reset Timeout: 7500 ms 00:10:42.458 Doorbell Stride: 4 bytes 00:10:42.458 NVM Subsystem Reset: Not Supported 00:10:42.458 Command Sets Supported 00:10:42.458 NVM Command Set: Supported 00:10:42.458 Boot Partition: Not Supported 00:10:42.458 Memory Page Size Minimum: 4096 bytes 00:10:42.458 Memory Page Size Maximum: 65536 bytes 00:10:42.458 Persistent Memory Region: Not Supported 00:10:42.458 Optional Asynchronous Events Supported 00:10:42.458 Namespace Attribute Notices: Supported 00:10:42.458 Firmware Activation Notices: Not Supported 00:10:42.458 ANA Change Notices: Not Supported 00:10:42.458 PLE Aggregate Log Change Notices: Not Supported 00:10:42.458 LBA Status Info Alert Notices: Not Supported 00:10:42.458 EGE Aggregate Log Change Notices: Not Supported 00:10:42.458 Normal NVM Subsystem Shutdown event: Not Supported 00:10:42.458 Zone Descriptor Change Notices: Not Supported 00:10:42.458 Discovery Log Change Notices: Not Supported 00:10:42.458 Controller Attributes 00:10:42.458 128-bit Host Identifier: Not Supported 00:10:42.458 Non-Operational Permissive Mode: Not Supported 00:10:42.458 NVM Sets: Not Supported 00:10:42.458 Read Recovery Levels: Not Supported 00:10:42.458 Endurance Groups: Not Supported 00:10:42.458 Predictable Latency Mode: Not Supported 00:10:42.458 Traffic Based Keep Alive: Not Supported 00:10:42.458 Namespace Granularity: Not Supported 00:10:42.458 SQ Associations: Not Supported 00:10:42.458 UUID List: Not Supported 00:10:42.458 Multi-Domain Subsystem: Not Supported 00:10:42.458 Fixed Capacity Management: Not Supported 00:10:42.458 Variable Capacity Management: Not Supported 00:10:42.458 Delete Endurance Group: Not Supported 00:10:42.458 Delete NVM Set: Not Supported 00:10:42.458 Extended LBA Formats Supported: Supported 00:10:42.458 Flexible Data Placement Supported: Not Supported
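For reference, the identify dumps in this test are produced by the per-controller loop visible in the xtrace markers above (nvme/nvme.sh@15 and @16). A minimal sketch of that loop, assuming bdfs holds the four PCIe addresses exercised in this run; how the real script discovers those addresses is not shown in this log:

    #!/usr/bin/env bash
    # Sketch of the loop traced as nvme/nvme.sh@15-16. The bdfs values are
    # assumed from the addresses appearing in this log, not taken from the script.
    bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
    for bdf in "${bdfs[@]}"; do
        # -r selects the transport ID to probe; -i 0 joins shared-memory group 0,
        # matching the flags shown in the trace above.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:${bdf}" -i 0
    done
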
00:10:42.458 00:10:42.458 Controller Memory Buffer Support 00:10:42.458 ================================ 00:10:42.458 Supported: No 00:10:42.458 00:10:42.458 Persistent Memory Region Support 00:10:42.458 ================================ 00:10:42.458 Supported: No 00:10:42.458 00:10:42.458 Admin Command Set Attributes 00:10:42.458 ============================ 00:10:42.458 Security Send/Receive: Not Supported 00:10:42.458 Format NVM: Supported 00:10:42.458 Firmware Activate/Download: Not Supported 00:10:42.458 Namespace Management: Supported 00:10:42.458 Device Self-Test: Not Supported 00:10:42.458 Directives: Supported 00:10:42.458 NVMe-MI: Not Supported 00:10:42.458 Virtualization Management: Not Supported 00:10:42.458 Doorbell Buffer Config: Supported 00:10:42.458 Get LBA Status Capability: Not Supported 00:10:42.458 Command & Feature Lockdown Capability: Not Supported 00:10:42.458 Abort Command Limit: 4 00:10:42.459 Async Event Request Limit: 4 00:10:42.459 Number of Firmware Slots: N/A 00:10:42.459 Firmware Slot 1 Read-Only: N/A 00:10:42.459 Firmware Activation Without Reset: N/A 00:10:42.459 Multiple Update Detection Support: N/A 00:10:42.459 Firmware Update Granularity: No Information Provided 00:10:42.459 Per-Namespace SMART Log: Yes 00:10:42.459 Asymmetric Namespace Access Log Page: Not Supported 00:10:42.459 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:10:42.459 Command Effects Log Page: Supported 00:10:42.459 Get Log Page Extended Data: Supported 00:10:42.459 Telemetry Log Pages: Not Supported 00:10:42.459 Persistent Event Log Pages: Not Supported 00:10:42.459 Supported Log Pages Log Page: May Support 00:10:42.459 Commands Supported & Effects Log Page: Not Supported 00:10:42.459 Feature Identifiers & Effects Log Page: May Support 00:10:42.459 NVMe-MI Commands & Effects Log Page: May Support 00:10:42.459 Data Area 4 for Telemetry Log: Not Supported 00:10:42.459 Error Log Page Entries Supported: 1 00:10:42.459 Keep Alive: Not Supported 00:10:42.459 00:10:42.459 NVM Command Set Attributes 00:10:42.459 ========================== 00:10:42.459 Submission Queue Entry Size 00:10:42.459 Max: 64 00:10:42.459 Min: 64 00:10:42.459 Completion Queue Entry Size 00:10:42.459 Max: 16 00:10:42.459 Min: 16 00:10:42.459 Number of Namespaces: 256 00:10:42.459 Compare Command: Supported 00:10:42.459 Write Uncorrectable Command: Not Supported 00:10:42.459 Dataset Management Command: Supported 00:10:42.459 Write Zeroes Command: Supported 00:10:42.459 Set Features Save Field: Supported 00:10:42.459 Reservations: Not Supported 00:10:42.459 Timestamp: Supported 00:10:42.459 Copy: Supported 00:10:42.459 Volatile Write Cache: Present 00:10:42.459 Atomic Write Unit (Normal): 1 00:10:42.459 Atomic Write Unit (PFail): 1 00:10:42.459 Atomic Compare & Write Unit: 1 00:10:42.459 Fused Compare & Write: Not Supported 00:10:42.459 Scatter-Gather List 00:10:42.459 SGL Command Set: Supported 00:10:42.459 SGL Keyed: Not Supported 00:10:42.459 SGL Bit Bucket Descriptor: Not Supported 00:10:42.459 SGL Metadata Pointer: Not Supported 00:10:42.459 Oversized SGL: Not Supported 00:10:42.459 SGL Metadata Address: Not Supported 00:10:42.459 SGL Offset: Not Supported 00:10:42.459 Transport SGL Data Block: Not Supported 00:10:42.459 Replay Protected Memory Block: Not Supported 00:10:42.459 00:10:42.459 Firmware Slot Information 00:10:42.459 ========================= 00:10:42.459 Active slot: 1 00:10:42.459 Slot 1 Firmware Revision: 1.0 00:10:42.459 00:10:42.459 00:10:42.459 Commands Supported and Effects 00:10:42.459 
============================== 00:10:42.459 Admin Commands 00:10:42.459 -------------- 00:10:42.459 Delete I/O Submission Queue (00h): Supported 00:10:42.459 Create I/O Submission Queue (01h): Supported 00:10:42.459 Get Log Page (02h): Supported 00:10:42.459 Delete I/O Completion Queue (04h): Supported 00:10:42.459 Create I/O Completion Queue (05h): Supported 00:10:42.459 Identify (06h): Supported 00:10:42.459 Abort (08h): Supported 00:10:42.459 Set Features (09h): Supported 00:10:42.459 Get Features (0Ah): Supported 00:10:42.459 Asynchronous Event Request (0Ch): Supported 00:10:42.459 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:42.459 Directive Send (19h): Supported 00:10:42.459 Directive Receive (1Ah): Supported 00:10:42.459 Virtualization Management (1Ch): Supported 00:10:42.459 Doorbell Buffer Config (7Ch): Supported 00:10:42.459 Format NVM (80h): Supported LBA-Change 00:10:42.459 I/O Commands 00:10:42.459 ------------ 00:10:42.459 Flush (00h): Supported LBA-Change 00:10:42.459 Write (01h): Supported LBA-Change 00:10:42.459 Read (02h): Supported 00:10:42.459 Compare (05h): Supported 00:10:42.459 Write Zeroes (08h): Supported LBA-Change 00:10:42.459 Dataset Management (09h): Supported LBA-Change 00:10:42.459 Unknown (0Ch): Supported 00:10:42.459 Unknown (12h): Supported 00:10:42.459 Copy (19h): Supported LBA-Change 00:10:42.459 Unknown (1Dh): Supported LBA-Change 00:10:42.459 00:10:42.459 Error Log 00:10:42.459 ========= 00:10:42.459 00:10:42.459 Arbitration 00:10:42.459 =========== 00:10:42.459 Arbitration Burst: no limit 00:10:42.459 00:10:42.459 Power Management 00:10:42.459 ================ 00:10:42.459 Number of Power States: 1 00:10:42.459 Current Power State: Power State #0 00:10:42.459 Power State #0: 00:10:42.459 Max Power: 25.00 W 00:10:42.459 Non-Operational State: Operational 00:10:42.459 Entry Latency: 16 microseconds 00:10:42.459 Exit Latency: 4 microseconds 00:10:42.459 Relative Read Throughput: 0 00:10:42.459 Relative Read Latency: 0 00:10:42.459 Relative Write Throughput: 0 00:10:42.459 Relative Write Latency: 0 00:10:42.733 Idle Power: Not Reported 00:10:42.733 Active Power: Not Reported 00:10:42.733 Non-Operational Permissive Mode: Not Supported 00:10:42.733 00:10:42.733 Health Information 00:10:42.733 ================== 00:10:42.733 Critical Warnings: 00:10:42.733 Available Spare Space: OK 00:10:42.733 Temperature: OK 00:10:42.733 Device Reliability: OK 00:10:42.733 Read Only: No 00:10:42.733 Volatile Memory Backup: OK 00:10:42.733 Current Temperature: 323 Kelvin (50 Celsius) 00:10:42.733 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:42.733 Available Spare: 0% 00:10:42.733 Available Spare Threshold: 0% 00:10:42.733 Life Percentage Used: 0% 00:10:42.733 Data Units Read: 870 00:10:42.733 Data Units Written: 715 00:10:42.733 Host Read Commands: 39501 00:10:42.733 Host Write Commands: 37135 00:10:42.733 Controller Busy Time: 0 minutes 00:10:42.733 Power Cycles: 0 00:10:42.733 Power On Hours: 0 hours 00:10:42.733 Unsafe Shutdowns: 0 00:10:42.733 Unrecoverable Media Errors: 0 00:10:42.733 Lifetime Error Log Entries: 0 00:10:42.733 Warning Temperature Time: 0 minutes 00:10:42.733 Critical Temperature Time: 0 minutes 00:10:42.733 00:10:42.733 Number of Queues 00:10:42.733 ================ 00:10:42.733 Number of I/O Submission Queues: 64 00:10:42.733 Number of I/O Completion Queues: 64 00:10:42.733 00:10:42.733 ZNS Specific Controller Data 00:10:42.733 ============================ 00:10:42.733 Zone Append Size Limit: 0 00:10:42.733 00:10:42.733 
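Two values in this stretch of output can be sanity-checked by hand: the health section above reports the composite temperature in integer Kelvin, with the Celsius figure obtained by subtracting 273 (323 K -> 50 C, 343 K -> 70 C), and the namespace listing just below reports 1310720 LBAs as 5GiB, which is exact for the current 4096-byte LBA format #04. A quick bash check of both:

    # Kelvin values as printed in the health section; the report subtracts 273.
    for k in 323 343; do echo "$k K = $((k - 273)) C"; done   # 50 C, 70 C
    # Namespace size: 1310720 LBAs x 4096 bytes (LBA format #04) = 5 GiB exactly.
    echo "$(( 1310720 * 4096 / 1024**3 ))GiB"                 # 5GiB
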
00:10:42.733 Active Namespaces 00:10:42.733 ================= 00:10:42.733 Namespace ID:1 00:10:42.733 Error Recovery Timeout: Unlimited 00:10:42.733 Command Set Identifier: NVM (00h) 00:10:42.733 Deallocate: Supported 00:10:42.733 Deallocated/Unwritten Error: Supported 00:10:42.733 Deallocated Read Value: All 0x00 00:10:42.733 Deallocate in Write Zeroes: Not Supported 00:10:42.733 Deallocated Guard Field: 0xFFFF 00:10:42.733 Flush: Supported 00:10:42.733 Reservation: Not Supported 00:10:42.733 Namespace Sharing Capabilities: Private 00:10:42.733 Size (in LBAs): 1310720 (5GiB) 00:10:42.733 Capacity (in LBAs): 1310720 (5GiB) 00:10:42.733 Utilization (in LBAs): 1310720 (5GiB) 00:10:42.733 Thin Provisioning: Not Supported 00:10:42.733 Per-NS Atomic Units: No 00:10:42.733 Maximum Single Source Range Length: 128 00:10:42.733 Maximum Copy Length: 128 00:10:42.733 Maximum Source Range Count: 128 00:10:42.733 NGUID/EUI64 Never Reused: No 00:10:42.733 Namespace Write Protected: No 00:10:42.733 Number of LBA Formats: 8 00:10:42.733 Current LBA Format: LBA Format #04 00:10:42.733 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:42.733 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:42.733 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:42.733 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:42.733 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:42.733 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:42.733 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:42.733 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:42.733 00:10:42.733 20:17:12 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:42.733 20:17:12 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:10:42.997 ===================================================== 00:10:42.997 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:42.997 ===================================================== 00:10:42.997 Controller Capabilities/Features 00:10:42.997 ================================ 00:10:42.997 Vendor ID: 1b36 00:10:42.997 Subsystem Vendor ID: 1af4 00:10:42.997 Serial Number: 12342 00:10:42.997 Model Number: QEMU NVMe Ctrl 00:10:42.997 Firmware Version: 8.0.0 00:10:42.997 Recommended Arb Burst: 6 00:10:42.997 IEEE OUI Identifier: 00 54 52 00:10:42.997 Multi-path I/O 00:10:42.997 May have multiple subsystem ports: No 00:10:42.997 May have multiple controllers: No 00:10:42.997 Associated with SR-IOV VF: No 00:10:42.997 Max Data Transfer Size: 524288 00:10:42.997 Max Number of Namespaces: 256 00:10:42.997 Max Number of I/O Queues: 64 00:10:42.997 NVMe Specification Version (VS): 1.4 00:10:42.997 NVMe Specification Version (Identify): 1.4 00:10:42.997 Maximum Queue Entries: 2048 00:10:42.997 Contiguous Queues Required: Yes 00:10:42.997 Arbitration Mechanisms Supported 00:10:42.997 Weighted Round Robin: Not Supported 00:10:42.997 Vendor Specific: Not Supported 00:10:42.997 Reset Timeout: 7500 ms 00:10:42.997 Doorbell Stride: 4 bytes 00:10:42.997 NVM Subsystem Reset: Not Supported 00:10:42.997 Command Sets Supported 00:10:42.997 NVM Command Set: Supported 00:10:42.997 Boot Partition: Not Supported 00:10:42.997 Memory Page Size Minimum: 4096 bytes 00:10:42.997 Memory Page Size Maximum: 65536 bytes 00:10:42.997 Persistent Memory Region: Not Supported 00:10:42.997 Optional Asynchronous Events Supported 00:10:42.997 Namespace Attribute Notices: Supported 00:10:42.997 Firmware Activation Notices: Not Supported 00:10:42.997 ANA Change 
Notices: Not Supported 00:10:42.997 PLE Aggregate Log Change Notices: Not Supported 00:10:42.997 LBA Status Info Alert Notices: Not Supported 00:10:42.997 EGE Aggregate Log Change Notices: Not Supported 00:10:42.997 Normal NVM Subsystem Shutdown event: Not Supported 00:10:42.997 Zone Descriptor Change Notices: Not Supported 00:10:42.997 Discovery Log Change Notices: Not Supported 00:10:42.997 Controller Attributes 00:10:42.997 128-bit Host Identifier: Not Supported 00:10:42.997 Non-Operational Permissive Mode: Not Supported 00:10:42.997 NVM Sets: Not Supported 00:10:42.997 Read Recovery Levels: Not Supported 00:10:42.997 Endurance Groups: Not Supported 00:10:42.997 Predictable Latency Mode: Not Supported 00:10:42.997 Traffic Based Keep Alive: Not Supported 00:10:42.997 Namespace Granularity: Not Supported 00:10:42.997 SQ Associations: Not Supported 00:10:42.997 UUID List: Not Supported 00:10:42.997 Multi-Domain Subsystem: Not Supported 00:10:42.997 Fixed Capacity Management: Not Supported 00:10:42.997 Variable Capacity Management: Not Supported 00:10:42.997 Delete Endurance Group: Not Supported 00:10:42.997 Delete NVM Set: Not Supported 00:10:42.997 Extended LBA Formats Supported: Supported 00:10:42.997 Flexible Data Placement Supported: Not Supported 00:10:42.997 00:10:42.997 Controller Memory Buffer Support 00:10:42.997 ================================ 00:10:42.997 Supported: No 00:10:42.997 00:10:42.997 Persistent Memory Region Support 00:10:42.997 ================================ 00:10:42.997 Supported: No 00:10:42.997 00:10:42.997 Admin Command Set Attributes 00:10:42.997 ============================ 00:10:42.997 Security Send/Receive: Not Supported 00:10:42.997 Format NVM: Supported 00:10:42.997 Firmware Activate/Download: Not Supported 00:10:42.997 Namespace Management: Supported 00:10:42.997 Device Self-Test: Not Supported 00:10:42.997 Directives: Supported 00:10:42.997 NVMe-MI: Not Supported 00:10:42.997 Virtualization Management: Not Supported 00:10:42.997 Doorbell Buffer Config: Supported 00:10:42.997 Get LBA Status Capability: Not Supported 00:10:42.997 Command & Feature Lockdown Capability: Not Supported 00:10:42.997 Abort Command Limit: 4 00:10:42.997 Async Event Request Limit: 4 00:10:42.997 Number of Firmware Slots: N/A 00:10:42.997 Firmware Slot 1 Read-Only: N/A 00:10:42.997 Firmware Activation Without Reset: N/A 00:10:42.997 Multiple Update Detection Support: N/A 00:10:42.997 Firmware Update Granularity: No Information Provided 00:10:42.997 Per-Namespace SMART Log: Yes 00:10:42.997 Asymmetric Namespace Access Log Page: Not Supported 00:10:42.997 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:10:42.997 Command Effects Log Page: Supported 00:10:42.997 Get Log Page Extended Data: Supported 00:10:42.997 Telemetry Log Pages: Not Supported 00:10:42.997 Persistent Event Log Pages: Not Supported 00:10:42.997 Supported Log Pages Log Page: May Support 00:10:42.997 Commands Supported & Effects Log Page: Not Supported 00:10:42.997 Feature Identifiers & Effects Log Page: May Support 00:10:42.998 NVMe-MI Commands & Effects Log Page: May Support 00:10:42.998 Data Area 4 for Telemetry Log: Not Supported 00:10:42.998 Error Log Page Entries Supported: 1 00:10:42.998 Keep Alive: Not Supported 00:10:42.998 00:10:42.998 NVM Command Set Attributes 00:10:42.998 ========================== 00:10:42.998 Submission Queue Entry Size 00:10:42.998 Max: 64 00:10:42.998 Min: 64 00:10:42.998 Completion Queue Entry Size 00:10:42.998 Max: 16 00:10:42.998 Min: 16 00:10:42.998 Number of Namespaces: 256 
00:10:42.998 Compare Command: Supported 00:10:42.998 Write Uncorrectable Command: Not Supported 00:10:42.998 Dataset Management Command: Supported 00:10:42.998 Write Zeroes Command: Supported 00:10:42.998 Set Features Save Field: Supported 00:10:42.998 Reservations: Not Supported 00:10:42.998 Timestamp: Supported 00:10:42.998 Copy: Supported 00:10:42.998 Volatile Write Cache: Present 00:10:42.998 Atomic Write Unit (Normal): 1 00:10:42.998 Atomic Write Unit (PFail): 1 00:10:42.998 Atomic Compare & Write Unit: 1 00:10:42.998 Fused Compare & Write: Not Supported 00:10:42.998 Scatter-Gather List 00:10:42.998 SGL Command Set: Supported 00:10:42.998 SGL Keyed: Not Supported 00:10:42.998 SGL Bit Bucket Descriptor: Not Supported 00:10:42.998 SGL Metadata Pointer: Not Supported 00:10:42.998 Oversized SGL: Not Supported 00:10:42.998 SGL Metadata Address: Not Supported 00:10:42.998 SGL Offset: Not Supported 00:10:42.998 Transport SGL Data Block: Not Supported 00:10:42.998 Replay Protected Memory Block: Not Supported 00:10:42.998 00:10:42.998 Firmware Slot Information 00:10:42.998 ========================= 00:10:42.998 Active slot: 1 00:10:42.998 Slot 1 Firmware Revision: 1.0 00:10:42.998 00:10:42.998 00:10:42.998 Commands Supported and Effects 00:10:42.998 ============================== 00:10:42.998 Admin Commands 00:10:42.998 -------------- 00:10:42.998 Delete I/O Submission Queue (00h): Supported 00:10:42.998 Create I/O Submission Queue (01h): Supported 00:10:42.998 Get Log Page (02h): Supported 00:10:42.998 Delete I/O Completion Queue (04h): Supported 00:10:42.998 Create I/O Completion Queue (05h): Supported 00:10:42.998 Identify (06h): Supported 00:10:42.998 Abort (08h): Supported 00:10:42.998 Set Features (09h): Supported 00:10:42.998 Get Features (0Ah): Supported 00:10:42.998 Asynchronous Event Request (0Ch): Supported 00:10:42.998 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:42.998 Directive Send (19h): Supported 00:10:42.998 Directive Receive (1Ah): Supported 00:10:42.998 Virtualization Management (1Ch): Supported 00:10:42.998 Doorbell Buffer Config (7Ch): Supported 00:10:42.998 Format NVM (80h): Supported LBA-Change 00:10:42.998 I/O Commands 00:10:42.998 ------------ 00:10:42.998 Flush (00h): Supported LBA-Change 00:10:42.998 Write (01h): Supported LBA-Change 00:10:42.998 Read (02h): Supported 00:10:42.998 Compare (05h): Supported 00:10:42.998 Write Zeroes (08h): Supported LBA-Change 00:10:42.998 Dataset Management (09h): Supported LBA-Change 00:10:42.998 Unknown (0Ch): Supported 00:10:42.998 Unknown (12h): Supported 00:10:42.998 Copy (19h): Supported LBA-Change 00:10:42.998 Unknown (1Dh): Supported LBA-Change 00:10:42.998 00:10:42.998 Error Log 00:10:42.998 ========= 00:10:42.998 00:10:42.998 Arbitration 00:10:42.998 =========== 00:10:42.998 Arbitration Burst: no limit 00:10:42.998 00:10:42.998 Power Management 00:10:42.998 ================ 00:10:42.998 Number of Power States: 1 00:10:42.998 Current Power State: Power State #0 00:10:42.998 Power State #0: 00:10:42.998 Max Power: 25.00 W 00:10:42.998 Non-Operational State: Operational 00:10:42.998 Entry Latency: 16 microseconds 00:10:42.998 Exit Latency: 4 microseconds 00:10:42.998 Relative Read Throughput: 0 00:10:42.998 Relative Read Latency: 0 00:10:42.998 Relative Write Throughput: 0 00:10:42.998 Relative Write Latency: 0 00:10:42.998 Idle Power: Not Reported 00:10:42.998 Active Power: Not Reported 00:10:42.998 Non-Operational Permissive Mode: Not Supported 00:10:42.998 00:10:42.998 Health Information 00:10:42.998 
================== 00:10:42.998 Critical Warnings: 00:10:42.998 Available Spare Space: OK 00:10:42.998 Temperature: OK 00:10:42.998 Device Reliability: OK 00:10:42.998 Read Only: No 00:10:42.998 Volatile Memory Backup: OK 00:10:42.998 Current Temperature: 323 Kelvin (50 Celsius) 00:10:42.998 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:42.998 Available Spare: 0% 00:10:42.998 Available Spare Threshold: 0% 00:10:42.998 Life Percentage Used: 0% 00:10:42.998 Data Units Read: 2551 00:10:42.998 Data Units Written: 2231 00:10:42.998 Host Read Commands: 116813 00:10:42.998 Host Write Commands: 112583 00:10:42.998 Controller Busy Time: 0 minutes 00:10:42.998 Power Cycles: 0 00:10:42.998 Power On Hours: 0 hours 00:10:42.998 Unsafe Shutdowns: 0 00:10:42.998 Unrecoverable Media Errors: 0 00:10:42.998 Lifetime Error Log Entries: 0 00:10:42.998 Warning Temperature Time: 0 minutes 00:10:42.998 Critical Temperature Time: 0 minutes 00:10:42.998 00:10:42.998 Number of Queues 00:10:42.998 ================ 00:10:42.998 Number of I/O Submission Queues: 64 00:10:42.998 Number of I/O Completion Queues: 64 00:10:42.998 00:10:42.998 ZNS Specific Controller Data 00:10:42.998 ============================ 00:10:42.998 Zone Append Size Limit: 0 00:10:42.998 00:10:42.998 00:10:42.998 Active Namespaces 00:10:42.998 ================= 00:10:42.998 Namespace ID:1 00:10:42.998 Error Recovery Timeout: Unlimited 00:10:42.998 Command Set Identifier: NVM (00h) 00:10:42.998 Deallocate: Supported 00:10:42.998 Deallocated/Unwritten Error: Supported 00:10:42.998 Deallocated Read Value: All 0x00 00:10:42.998 Deallocate in Write Zeroes: Not Supported 00:10:42.998 Deallocated Guard Field: 0xFFFF 00:10:42.998 Flush: Supported 00:10:42.998 Reservation: Not Supported 00:10:42.998 Namespace Sharing Capabilities: Private 00:10:42.998 Size (in LBAs): 1048576 (4GiB) 00:10:42.998 Capacity (in LBAs): 1048576 (4GiB) 00:10:42.998 Utilization (in LBAs): 1048576 (4GiB) 00:10:42.998 Thin Provisioning: Not Supported 00:10:42.998 Per-NS Atomic Units: No 00:10:42.998 Maximum Single Source Range Length: 128 00:10:42.998 Maximum Copy Length: 128 00:10:42.998 Maximum Source Range Count: 128 00:10:42.998 NGUID/EUI64 Never Reused: No 00:10:42.998 Namespace Write Protected: No 00:10:42.998 Number of LBA Formats: 8 00:10:42.998 Current LBA Format: LBA Format #04 00:10:42.998 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:42.998 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:42.998 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:42.998 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:42.998 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:42.998 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:42.998 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:42.998 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:42.998 00:10:42.998 Namespace ID:2 00:10:42.998 Error Recovery Timeout: Unlimited 00:10:42.998 Command Set Identifier: NVM (00h) 00:10:42.998 Deallocate: Supported 00:10:42.999 Deallocated/Unwritten Error: Supported 00:10:42.999 Deallocated Read Value: All 0x00 00:10:42.999 Deallocate in Write Zeroes: Not Supported 00:10:42.999 Deallocated Guard Field: 0xFFFF 00:10:42.999 Flush: Supported 00:10:42.999 Reservation: Not Supported 00:10:42.999 Namespace Sharing Capabilities: Private 00:10:42.999 Size (in LBAs): 1048576 (4GiB) 00:10:42.999 Capacity (in LBAs): 1048576 (4GiB) 00:10:42.999 Utilization (in LBAs): 1048576 (4GiB) 00:10:42.999 Thin Provisioning: Not Supported 00:10:42.999 Per-NS Atomic Units: 
No 00:10:42.999 Maximum Single Source Range Length: 128 00:10:42.999 Maximum Copy Length: 128 00:10:42.999 Maximum Source Range Count: 128 00:10:42.999 NGUID/EUI64 Never Reused: No 00:10:42.999 Namespace Write Protected: No 00:10:42.999 Number of LBA Formats: 8 00:10:42.999 Current LBA Format: LBA Format #04 00:10:42.999 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:42.999 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:42.999 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:42.999 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:42.999 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:42.999 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:42.999 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:42.999 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:42.999 00:10:42.999 Namespace ID:3 00:10:42.999 Error Recovery Timeout: Unlimited 00:10:42.999 Command Set Identifier: NVM (00h) 00:10:42.999 Deallocate: Supported 00:10:42.999 Deallocated/Unwritten Error: Supported 00:10:42.999 Deallocated Read Value: All 0x00 00:10:42.999 Deallocate in Write Zeroes: Not Supported 00:10:42.999 Deallocated Guard Field: 0xFFFF 00:10:42.999 Flush: Supported 00:10:42.999 Reservation: Not Supported 00:10:42.999 Namespace Sharing Capabilities: Private 00:10:42.999 Size (in LBAs): 1048576 (4GiB) 00:10:42.999 Capacity (in LBAs): 1048576 (4GiB) 00:10:42.999 Utilization (in LBAs): 1048576 (4GiB) 00:10:42.999 Thin Provisioning: Not Supported 00:10:42.999 Per-NS Atomic Units: No 00:10:42.999 Maximum Single Source Range Length: 128 00:10:42.999 Maximum Copy Length: 128 00:10:42.999 Maximum Source Range Count: 128 00:10:42.999 NGUID/EUI64 Never Reused: No 00:10:42.999 Namespace Write Protected: No 00:10:42.999 Number of LBA Formats: 8 00:10:42.999 Current LBA Format: LBA Format #04 00:10:42.999 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:42.999 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:42.999 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:42.999 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:42.999 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:42.999 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:42.999 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:10:42.999 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:42.999 00:10:42.999 20:17:13 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:10:42.999 20:17:13 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:10:43.259 ===================================================== 00:10:43.259 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:43.259 ===================================================== 00:10:43.259 Controller Capabilities/Features 00:10:43.259 ================================ 00:10:43.259 Vendor ID: 1b36 00:10:43.259 Subsystem Vendor ID: 1af4 00:10:43.259 Serial Number: 12343 00:10:43.259 Model Number: QEMU NVMe Ctrl 00:10:43.259 Firmware Version: 8.0.0 00:10:43.259 Recommended Arb Burst: 6 00:10:43.259 IEEE OUI Identifier: 00 54 52 00:10:43.259 Multi-path I/O 00:10:43.259 May have multiple subsystem ports: No 00:10:43.259 May have multiple controllers: Yes 00:10:43.259 Associated with SR-IOV VF: No 00:10:43.259 Max Data Transfer Size: 524288 00:10:43.259 Max Number of Namespaces: 256 00:10:43.259 Max Number of I/O Queues: 64 00:10:43.259 NVMe Specification Version (VS): 1.4 00:10:43.259 NVMe Specification Version (Identify): 1.4 00:10:43.259 Maximum Queue Entries: 2048 
00:10:43.259 Contiguous Queues Required: Yes 00:10:43.259 Arbitration Mechanisms Supported 00:10:43.259 Weighted Round Robin: Not Supported 00:10:43.259 Vendor Specific: Not Supported 00:10:43.259 Reset Timeout: 7500 ms 00:10:43.259 Doorbell Stride: 4 bytes 00:10:43.259 NVM Subsystem Reset: Not Supported 00:10:43.259 Command Sets Supported 00:10:43.259 NVM Command Set: Supported 00:10:43.259 Boot Partition: Not Supported 00:10:43.259 Memory Page Size Minimum: 4096 bytes 00:10:43.259 Memory Page Size Maximum: 65536 bytes 00:10:43.259 Persistent Memory Region: Not Supported 00:10:43.259 Optional Asynchronous Events Supported 00:10:43.259 Namespace Attribute Notices: Supported 00:10:43.259 Firmware Activation Notices: Not Supported 00:10:43.259 ANA Change Notices: Not Supported 00:10:43.259 PLE Aggregate Log Change Notices: Not Supported 00:10:43.259 LBA Status Info Alert Notices: Not Supported 00:10:43.259 EGE Aggregate Log Change Notices: Not Supported 00:10:43.259 Normal NVM Subsystem Shutdown event: Not Supported 00:10:43.259 Zone Descriptor Change Notices: Not Supported 00:10:43.259 Discovery Log Change Notices: Not Supported 00:10:43.259 Controller Attributes 00:10:43.259 128-bit Host Identifier: Not Supported 00:10:43.259 Non-Operational Permissive Mode: Not Supported 00:10:43.259 NVM Sets: Not Supported 00:10:43.259 Read Recovery Levels: Not Supported 00:10:43.259 Endurance Groups: Supported 00:10:43.259 Predictable Latency Mode: Not Supported 00:10:43.259 Traffic Based Keep Alive: Not Supported 00:10:43.259 Namespace Granularity: Not Supported 00:10:43.259 SQ Associations: Not Supported 00:10:43.259 UUID List: Not Supported 00:10:43.259 Multi-Domain Subsystem: Not Supported 00:10:43.259 Fixed Capacity Management: Not Supported 00:10:43.259 Variable Capacity Management: Not Supported 00:10:43.259 Delete Endurance Group: Not Supported 00:10:43.259 Delete NVM Set: Not Supported 00:10:43.259 Extended LBA Formats Supported: Supported 00:10:43.259 Flexible Data Placement Supported: Supported 00:10:43.259 00:10:43.259 Controller Memory Buffer Support 00:10:43.259 ================================ 00:10:43.259 Supported: No 00:10:43.259 00:10:43.259 Persistent Memory Region Support 00:10:43.259 ================================ 00:10:43.259 Supported: No 00:10:43.259 00:10:43.259 Admin Command Set Attributes 00:10:43.259 ============================ 00:10:43.259 Security Send/Receive: Not Supported 00:10:43.259 Format NVM: Supported 00:10:43.259 Firmware Activate/Download: Not Supported 00:10:43.259 Namespace Management: Supported 00:10:43.259 Device Self-Test: Not Supported 00:10:43.259 Directives: Supported 00:10:43.259 NVMe-MI: Not Supported 00:10:43.259 Virtualization Management: Not Supported 00:10:43.259 Doorbell Buffer Config: Supported 00:10:43.259 Get LBA Status Capability: Not Supported 00:10:43.259 Command & Feature Lockdown Capability: Not Supported 00:10:43.259 Abort Command Limit: 4 00:10:43.259 Async Event Request Limit: 4 00:10:43.259 Number of Firmware Slots: N/A 00:10:43.259 Firmware Slot 1 Read-Only: N/A 00:10:43.259 Firmware Activation Without Reset: N/A 00:10:43.259 Multiple Update Detection Support: N/A 00:10:43.259 Firmware Update Granularity: No Information Provided 00:10:43.259 Per-Namespace SMART Log: Yes 00:10:43.259 Asymmetric Namespace Access Log Page: Not Supported 00:10:43.259 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:10:43.259 Command Effects Log Page: Supported 00:10:43.259 Get Log Page Extended Data: Supported 00:10:43.259 Telemetry Log Pages: Not 
Supported 00:10:43.259 Persistent Event Log Pages: Not Supported 00:10:43.259 Supported Log Pages Log Page: May Support 00:10:43.259 Commands Supported & Effects Log Page: Not Supported 00:10:43.259 Feature Identifiers & Effects Log Page: May Support 00:10:43.259 NVMe-MI Commands & Effects Log Page: May Support 00:10:43.259 Data Area 4 for Telemetry Log: Not Supported 00:10:43.259 Error Log Page Entries Supported: 1 00:10:43.259 Keep Alive: Not Supported 00:10:43.259 00:10:43.259 NVM Command Set Attributes 00:10:43.259 ========================== 00:10:43.259 Submission Queue Entry Size 00:10:43.259 Max: 64 00:10:43.259 Min: 64 00:10:43.259 Completion Queue Entry Size 00:10:43.259 Max: 16 00:10:43.259 Min: 16 00:10:43.259 Number of Namespaces: 256 00:10:43.259 Compare Command: Supported 00:10:43.259 Write Uncorrectable Command: Not Supported 00:10:43.259 Dataset Management Command: Supported 00:10:43.259 Write Zeroes Command: Supported 00:10:43.259 Set Features Save Field: Supported 00:10:43.259 Reservations: Not Supported 00:10:43.259 Timestamp: Supported 00:10:43.259 Copy: Supported 00:10:43.259 Volatile Write Cache: Present 00:10:43.259 Atomic Write Unit (Normal): 1 00:10:43.259 Atomic Write Unit (PFail): 1 00:10:43.259 Atomic Compare & Write Unit: 1 00:10:43.259 Fused Compare & Write: Not Supported 00:10:43.259 Scatter-Gather List 00:10:43.259 SGL Command Set: Supported 00:10:43.259 SGL Keyed: Not Supported 00:10:43.259 SGL Bit Bucket Descriptor: Not Supported 00:10:43.259 SGL Metadata Pointer: Not Supported 00:10:43.259 Oversized SGL: Not Supported 00:10:43.260 SGL Metadata Address: Not Supported 00:10:43.260 SGL Offset: Not Supported 00:10:43.260 Transport SGL Data Block: Not Supported 00:10:43.260 Replay Protected Memory Block: Not Supported 00:10:43.260 00:10:43.260 Firmware Slot Information 00:10:43.260 ========================= 00:10:43.260 Active slot: 1 00:10:43.260 Slot 1 Firmware Revision: 1.0 00:10:43.260 00:10:43.260 00:10:43.260 Commands Supported and Effects 00:10:43.260 ============================== 00:10:43.260 Admin Commands 00:10:43.260 -------------- 00:10:43.260 Delete I/O Submission Queue (00h): Supported 00:10:43.260 Create I/O Submission Queue (01h): Supported 00:10:43.260 Get Log Page (02h): Supported 00:10:43.260 Delete I/O Completion Queue (04h): Supported 00:10:43.260 Create I/O Completion Queue (05h): Supported 00:10:43.260 Identify (06h): Supported 00:10:43.260 Abort (08h): Supported 00:10:43.260 Set Features (09h): Supported 00:10:43.260 Get Features (0Ah): Supported 00:10:43.260 Asynchronous Event Request (0Ch): Supported 00:10:43.260 Namespace Attachment (15h): Supported NS-Inventory-Change 00:10:43.260 Directive Send (19h): Supported 00:10:43.260 Directive Receive (1Ah): Supported 00:10:43.260 Virtualization Management (1Ch): Supported 00:10:43.260 Doorbell Buffer Config (7Ch): Supported 00:10:43.260 Format NVM (80h): Supported LBA-Change 00:10:43.260 I/O Commands 00:10:43.260 ------------ 00:10:43.260 Flush (00h): Supported LBA-Change 00:10:43.260 Write (01h): Supported LBA-Change 00:10:43.260 Read (02h): Supported 00:10:43.260 Compare (05h): Supported 00:10:43.260 Write Zeroes (08h): Supported LBA-Change 00:10:43.260 Dataset Management (09h): Supported LBA-Change 00:10:43.260 Unknown (0Ch): Supported 00:10:43.260 Unknown (12h): Supported 00:10:43.260 Copy (19h): Supported LBA-Change 00:10:43.260 Unknown (1Dh): Supported LBA-Change 00:10:43.260 00:10:43.260 Error Log 00:10:43.260 ========= 00:10:43.260 00:10:43.260 Arbitration 00:10:43.260 =========== 
00:10:43.260 Arbitration Burst: no limit 00:10:43.260 00:10:43.260 Power Management 00:10:43.260 ================ 00:10:43.260 Number of Power States: 1 00:10:43.260 Current Power State: Power State #0 00:10:43.260 Power State #0: 00:10:43.260 Max Power: 25.00 W 00:10:43.260 Non-Operational State: Operational 00:10:43.260 Entry Latency: 16 microseconds 00:10:43.260 Exit Latency: 4 microseconds 00:10:43.260 Relative Read Throughput: 0 00:10:43.260 Relative Read Latency: 0 00:10:43.260 Relative Write Throughput: 0 00:10:43.260 Relative Write Latency: 0 00:10:43.260 Idle Power: Not Reported 00:10:43.260 Active Power: Not Reported 00:10:43.260 Non-Operational Permissive Mode: Not Supported 00:10:43.260 00:10:43.260 Health Information 00:10:43.260 ================== 00:10:43.260 Critical Warnings: 00:10:43.260 Available Spare Space: OK 00:10:43.260 Temperature: OK 00:10:43.260 Device Reliability: OK 00:10:43.260 Read Only: No 00:10:43.260 Volatile Memory Backup: OK 00:10:43.260 Current Temperature: 323 Kelvin (50 Celsius) 00:10:43.260 Temperature Threshold: 343 Kelvin (70 Celsius) 00:10:43.260 Available Spare: 0% 00:10:43.260 Available Spare Threshold: 0% 00:10:43.260 Life Percentage Used: 0% 00:10:43.260 Data Units Read: 921 00:10:43.260 Data Units Written: 814 00:10:43.260 Host Read Commands: 39566 00:10:43.260 Host Write Commands: 38156 00:10:43.260 Controller Busy Time: 0 minutes 00:10:43.260 Power Cycles: 0 00:10:43.260 Power On Hours: 0 hours 00:10:43.260 Unsafe Shutdowns: 0 00:10:43.260 Unrecoverable Media Errors: 0 00:10:43.260 Lifetime Error Log Entries: 0 00:10:43.260 Warning Temperature Time: 0 minutes 00:10:43.260 Critical Temperature Time: 0 minutes 00:10:43.260 00:10:43.260 Number of Queues 00:10:43.260 ================ 00:10:43.260 Number of I/O Submission Queues: 64 00:10:43.260 Number of I/O Completion Queues: 64 00:10:43.260 00:10:43.260 ZNS Specific Controller Data 00:10:43.260 ============================ 00:10:43.260 Zone Append Size Limit: 0 00:10:43.260 00:10:43.260 00:10:43.260 Active Namespaces 00:10:43.260 ================= 00:10:43.260 Namespace ID:1 00:10:43.260 Error Recovery Timeout: Unlimited 00:10:43.260 Command Set Identifier: NVM (00h) 00:10:43.260 Deallocate: Supported 00:10:43.260 Deallocated/Unwritten Error: Supported 00:10:43.260 Deallocated Read Value: All 0x00 00:10:43.260 Deallocate in Write Zeroes: Not Supported 00:10:43.260 Deallocated Guard Field: 0xFFFF 00:10:43.260 Flush: Supported 00:10:43.260 Reservation: Not Supported 00:10:43.260 Namespace Sharing Capabilities: Multiple Controllers 00:10:43.260 Size (in LBAs): 262144 (1GiB) 00:10:43.260 Capacity (in LBAs): 262144 (1GiB) 00:10:43.260 Utilization (in LBAs): 262144 (1GiB) 00:10:43.260 Thin Provisioning: Not Supported 00:10:43.260 Per-NS Atomic Units: No 00:10:43.260 Maximum Single Source Range Length: 128 00:10:43.260 Maximum Copy Length: 128 00:10:43.260 Maximum Source Range Count: 128 00:10:43.260 NGUID/EUI64 Never Reused: No 00:10:43.260 Namespace Write Protected: No 00:10:43.260 Endurance group ID: 1 00:10:43.260 Number of LBA Formats: 8 00:10:43.260 Current LBA Format: LBA Format #04 00:10:43.260 LBA Format #00: Data Size: 512 Metadata Size: 0 00:10:43.260 LBA Format #01: Data Size: 512 Metadata Size: 8 00:10:43.260 LBA Format #02: Data Size: 512 Metadata Size: 16 00:10:43.260 LBA Format #03: Data Size: 512 Metadata Size: 64 00:10:43.260 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:10:43.260 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:10:43.260 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:10:43.260 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:10:43.260 00:10:43.260 Get Feature FDP: 00:10:43.260 ================ 00:10:43.260 Enabled: Yes 00:10:43.260 FDP configuration index: 0 00:10:43.260 00:10:43.260 FDP configurations log page 00:10:43.260 =========================== 00:10:43.260 Number of FDP configurations: 1 00:10:43.260 Version: 0 00:10:43.260 Size: 112 00:10:43.260 FDP Configuration Descriptor: 0 00:10:43.260 Descriptor Size: 96 00:10:43.260 Reclaim Group Identifier format: 2 00:10:43.260 FDP Volatile Write Cache: Not Present 00:10:43.260 FDP Configuration: Valid 00:10:43.260 Vendor Specific Size: 0 00:10:43.260 Number of Reclaim Groups: 2 00:10:43.260 Number of Reclaim Unit Handles: 8 00:10:43.260 Max Placement Identifiers: 128 00:10:43.260 Number of Namespaces Supported: 256 00:10:43.260 Reclaim unit Nominal Size: 6000000 bytes 00:10:43.260 Estimated Reclaim Unit Time Limit: Not Reported 00:10:43.260 RUH Desc #000: RUH Type: Initially Isolated 00:10:43.260 RUH Desc #001: RUH Type: Initially Isolated 00:10:43.260 RUH Desc #002: RUH Type: Initially Isolated 00:10:43.260 RUH Desc #003: RUH Type: Initially Isolated 00:10:43.260 RUH Desc #004: RUH Type: Initially Isolated 00:10:43.260 RUH Desc #005: RUH Type: Initially Isolated 00:10:43.260 RUH Desc #006: RUH Type: Initially Isolated 00:10:43.260 RUH Desc #007: RUH Type: Initially Isolated 00:10:43.260 00:10:43.260 FDP reclaim unit handle usage log page 00:10:43.260 ====================================== 00:10:43.260 Number of Reclaim Unit Handles: 8 00:10:43.260 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:43.260 RUH Usage Desc #001: RUH Attributes: Unused 00:10:43.260 RUH Usage Desc #002: RUH Attributes: Unused 00:10:43.260 RUH Usage Desc #003: RUH Attributes: Unused 00:10:43.260 RUH Usage Desc #004: RUH Attributes: Unused 00:10:43.260 RUH Usage Desc #005: RUH Attributes: Unused 00:10:43.260 RUH Usage Desc #006: RUH Attributes: Unused 00:10:43.260 RUH Usage Desc #007: RUH Attributes: Unused 00:10:43.260 00:10:43.260 FDP statistics log page 00:10:43.260 ======================= 00:10:43.260 Host bytes with metadata written: 516005888 00:10:43.260 Media bytes with metadata written: 516063232 00:10:43.260 Media bytes erased: 0 00:10:43.260 00:10:43.260 FDP events log page 00:10:43.260 =================== 00:10:43.260 Number of FDP events: 0 00:10:43.260 00:10:43.260 00:10:43.260 real 0m1.738s 00:10:43.260 user 0m0.722s 00:10:43.260 sys 0m0.796s 00:10:43.260 20:17:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:43.260 20:17:13 -- common/autotest_common.sh@10 -- # set +x 00:10:43.260 ************************************ 00:10:43.260 END TEST nvme_identify 00:10:43.260 ************************************ 00:10:43.261 20:17:13 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:10:43.261 20:17:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:43.261 20:17:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:43.261 20:17:13 -- common/autotest_common.sh@10 -- # set +x 00:10:43.261 ************************************ 00:10:43.261 START TEST nvme_perf 00:10:43.261 ************************************ 00:10:43.520 20:17:13 -- common/autotest_common.sh@1111 -- # nvme_perf 00:10:43.520 20:17:13 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:44.900 Initializing NVMe Controllers 00:10:44.900 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:44.900 
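For readability, here is the spdk_nvme_perf invocation traced just above (nvme/nvme.sh@22) restated with annotated flags. The queue depth, workload, I/O size, and run-time options follow the tool's documented usage; the readings of -LL, -i, and -N are assumptions inferred from this log (the doubled -L appears to request the detailed latency histograms printed further down, and -i 0 matches the shared-memory group used by the identify runs):

    # Annotated restatement of the traced command; comments inside a bash array
    # assignment are legal and ignored at execution time.
    perf_args=(
        -q 128    # I/O queue depth per namespace
        -w read   # I/O pattern: 100% reads
        -o 12288  # I/O size in bytes (12 KiB)
        -t 1      # run time in seconds
        -LL       # latency tracking; doubled flag assumed to add histograms
        -i 0      # shared-memory group ID (assumed; matches the identify runs)
        -N        # kept verbatim from the trace (assumed: skip shutdown notification)
    )
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf "${perf_args[@]}"
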
Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:44.900 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:44.900 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:44.900 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:44.900 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:44.900 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:44.900 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:44.900 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:44.900 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:44.900 Initialization complete. Launching workers. 00:10:44.900 ======================================================== 00:10:44.900 Latency(us) 00:10:44.900 Device Information : IOPS MiB/s Average min max 00:10:44.900 PCIE (0000:00:10.0) NSID 1 from core 0: 13439.61 157.50 9531.75 7772.92 63473.51 00:10:44.900 PCIE (0000:00:11.0) NSID 1 from core 0: 13439.61 157.50 9498.64 7886.90 59921.04 00:10:44.900 PCIE (0000:00:13.0) NSID 1 from core 0: 13439.61 157.50 9464.93 7881.12 56804.18 00:10:44.900 PCIE (0000:00:12.0) NSID 1 from core 0: 13439.61 157.50 9431.52 7860.83 53372.31 00:10:44.900 PCIE (0000:00:12.0) NSID 2 from core 0: 13439.61 157.50 9397.64 7842.71 49969.40 00:10:44.900 PCIE (0000:00:12.0) NSID 3 from core 0: 13503.61 158.25 9319.06 7917.48 40077.28 00:10:44.900 ======================================================== 00:10:44.900 Total : 80701.66 945.72 9440.49 7772.92 63473.51 00:10:44.900 00:10:44.900 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:44.900 ================================================================================= 00:10:44.900 1.00000% : 8053.822us 00:10:44.900 10.00000% : 8317.018us 00:10:44.900 25.00000% : 8580.215us 00:10:44.900 50.00000% : 8896.051us 00:10:44.900 75.00000% : 9264.527us 00:10:44.900 90.00000% : 9896.199us 00:10:44.900 95.00000% : 11001.626us 00:10:44.900 98.00000% : 14423.184us 00:10:44.900 99.00000% : 15897.086us 00:10:44.900 99.50000% : 53692.145us 00:10:44.900 99.90000% : 63167.229us 00:10:44.900 99.99000% : 63588.344us 00:10:44.900 99.99900% : 63588.344us 00:10:44.900 99.99990% : 63588.344us 00:10:44.900 99.99999% : 63588.344us 00:10:44.900 00:10:44.900 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:44.900 ================================================================================= 00:10:44.900 1.00000% : 8106.461us 00:10:44.900 10.00000% : 8369.658us 00:10:44.900 25.00000% : 8580.215us 00:10:44.900 50.00000% : 8896.051us 00:10:44.900 75.00000% : 9264.527us 00:10:44.900 90.00000% : 9843.560us 00:10:44.900 95.00000% : 10948.986us 00:10:44.900 98.00000% : 14423.184us 00:10:44.900 99.00000% : 15791.807us 00:10:44.900 99.50000% : 50744.341us 00:10:44.900 99.90000% : 59377.195us 00:10:44.900 99.99000% : 60219.425us 00:10:44.900 99.99900% : 60219.425us 00:10:44.900 99.99990% : 60219.425us 00:10:44.900 99.99999% : 60219.425us 00:10:44.900 00:10:44.900 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:44.900 ================================================================================= 00:10:44.900 1.00000% : 8106.461us 00:10:44.900 10.00000% : 8369.658us 00:10:44.900 25.00000% : 8580.215us 00:10:44.900 50.00000% : 8896.051us 00:10:44.900 75.00000% : 9264.527us 00:10:44.900 90.00000% : 9843.560us 00:10:44.900 95.00000% : 10948.986us 00:10:44.900 98.00000% : 14002.069us 00:10:44.900 99.00000% : 15686.529us 00:10:44.900 99.50000% : 47796.537us 00:10:44.900 99.90000% : 56429.391us 
00:10:44.900 99.99000% : 56850.506us 00:10:44.900 99.99900% : 56850.506us 00:10:44.900 99.99990% : 56850.506us 00:10:44.900 99.99999% : 56850.506us 00:10:44.900 00:10:44.900 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:44.900 ================================================================================= 00:10:44.900 1.00000% : 8106.461us 00:10:44.900 10.00000% : 8369.658us 00:10:44.900 25.00000% : 8580.215us 00:10:44.900 50.00000% : 8896.051us 00:10:44.900 75.00000% : 9264.527us 00:10:44.900 90.00000% : 9843.560us 00:10:44.900 95.00000% : 10791.068us 00:10:44.900 98.00000% : 13896.790us 00:10:44.900 99.00000% : 15791.807us 00:10:44.900 99.50000% : 44427.618us 00:10:44.900 99.90000% : 52849.915us 00:10:44.900 99.99000% : 53481.587us 00:10:44.900 99.99900% : 53481.587us 00:10:44.900 99.99990% : 53481.587us 00:10:44.900 99.99999% : 53481.587us 00:10:44.900 00:10:44.900 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:44.900 ================================================================================= 00:10:44.900 1.00000% : 8106.461us 00:10:44.900 10.00000% : 8369.658us 00:10:44.900 25.00000% : 8580.215us 00:10:44.900 50.00000% : 8896.051us 00:10:44.900 75.00000% : 9264.527us 00:10:44.900 90.00000% : 9843.560us 00:10:44.900 95.00000% : 10791.068us 00:10:44.900 98.00000% : 13896.790us 00:10:44.900 99.00000% : 15791.807us 00:10:44.900 99.50000% : 41269.256us 00:10:44.900 99.90000% : 49480.996us 00:10:44.900 99.99000% : 50112.668us 00:10:44.900 99.99900% : 50112.668us 00:10:44.900 99.99990% : 50112.668us 00:10:44.900 99.99999% : 50112.668us 00:10:44.900 00:10:44.900 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:44.900 ================================================================================= 00:10:44.900 1.00000% : 8106.461us 00:10:44.900 10.00000% : 8369.658us 00:10:44.900 25.00000% : 8580.215us 00:10:44.900 50.00000% : 8896.051us 00:10:44.900 75.00000% : 9264.527us 00:10:44.900 90.00000% : 9896.199us 00:10:44.900 95.00000% : 11001.626us 00:10:44.900 98.00000% : 14423.184us 00:10:44.900 99.00000% : 15897.086us 00:10:44.900 99.50000% : 29899.155us 00:10:44.900 99.90000% : 39584.797us 00:10:44.900 99.99000% : 40216.469us 00:10:44.900 99.99900% : 40216.469us 00:10:44.900 99.99990% : 40216.469us 00:10:44.900 99.99999% : 40216.469us 00:10:44.900 00:10:44.900 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:44.900 ============================================================================== 00:10:44.900 Range in us Cumulative IO count 00:10:44.900 7737.986 - 7790.625: 0.0149% ( 2) 00:10:44.900 7790.625 - 7843.264: 0.0521% ( 5) 00:10:44.900 7843.264 - 7895.904: 0.0744% ( 3) 00:10:44.900 7895.904 - 7948.543: 0.2307% ( 21) 00:10:44.900 7948.543 - 8001.182: 0.6994% ( 63) 00:10:44.900 8001.182 - 8053.822: 1.3765% ( 91) 00:10:44.900 8053.822 - 8106.461: 2.5446% ( 157) 00:10:44.900 8106.461 - 8159.100: 4.1295% ( 213) 00:10:44.900 8159.100 - 8211.740: 6.1384% ( 270) 00:10:44.900 8211.740 - 8264.379: 8.4226% ( 307) 00:10:44.900 8264.379 - 8317.018: 10.9524% ( 340) 00:10:44.900 8317.018 - 8369.658: 13.7723% ( 379) 00:10:44.900 8369.658 - 8422.297: 16.9643% ( 429) 00:10:44.900 8422.297 - 8474.937: 20.2009% ( 435) 00:10:44.900 8474.937 - 8527.576: 23.6533% ( 464) 00:10:44.900 8527.576 - 8580.215: 27.2917% ( 489) 00:10:44.900 8580.215 - 8632.855: 31.0417% ( 504) 00:10:44.900 8632.855 - 8685.494: 34.9777% ( 529) 00:10:44.900 8685.494 - 8738.133: 39.0848% ( 552) 00:10:44.900 8738.133 - 8790.773: 43.0804% ( 
537) 00:10:44.900 8790.773 - 8843.412: 47.1875% ( 552) 00:10:44.900 8843.412 - 8896.051: 51.4435% ( 572) 00:10:44.900 8896.051 - 8948.691: 55.7664% ( 581) 00:10:44.900 8948.691 - 9001.330: 59.7396% ( 534) 00:10:44.900 9001.330 - 9053.969: 63.5938% ( 518) 00:10:44.900 9053.969 - 9106.609: 66.9717% ( 454) 00:10:44.900 9106.609 - 9159.248: 70.0074% ( 408) 00:10:44.900 9159.248 - 9211.888: 72.7307% ( 366) 00:10:44.900 9211.888 - 9264.527: 75.1190% ( 321) 00:10:44.900 9264.527 - 9317.166: 77.2619% ( 288) 00:10:44.900 9317.166 - 9369.806: 79.1146% ( 249) 00:10:44.900 9369.806 - 9422.445: 80.7961% ( 226) 00:10:44.900 9422.445 - 9475.084: 82.5223% ( 232) 00:10:44.900 9475.084 - 9527.724: 83.9435% ( 191) 00:10:44.900 9527.724 - 9580.363: 85.3720% ( 192) 00:10:44.900 9580.363 - 9633.002: 86.4881% ( 150) 00:10:44.900 9633.002 - 9685.642: 87.4554% ( 130) 00:10:44.900 9685.642 - 9738.281: 88.3110% ( 115) 00:10:44.900 9738.281 - 9790.920: 89.1369% ( 111) 00:10:44.900 9790.920 - 9843.560: 89.8065% ( 90) 00:10:44.900 9843.560 - 9896.199: 90.5580% ( 101) 00:10:44.900 9896.199 - 9948.839: 91.1086% ( 74) 00:10:44.900 9948.839 - 10001.478: 91.5476% ( 59) 00:10:44.900 10001.478 - 10054.117: 91.9420% ( 53) 00:10:44.900 10054.117 - 10106.757: 92.2991% ( 48) 00:10:44.900 10106.757 - 10159.396: 92.6637% ( 49) 00:10:44.900 10159.396 - 10212.035: 92.9762% ( 42) 00:10:44.900 10212.035 - 10264.675: 93.2887% ( 42) 00:10:44.900 10264.675 - 10317.314: 93.6310% ( 46) 00:10:44.900 10317.314 - 10369.953: 93.8988% ( 36) 00:10:44.900 10369.953 - 10422.593: 94.1518% ( 34) 00:10:44.900 10422.593 - 10475.232: 94.3155% ( 22) 00:10:44.900 10475.232 - 10527.871: 94.4717% ( 21) 00:10:44.901 10527.871 - 10580.511: 94.5759% ( 14) 00:10:44.901 10580.511 - 10633.150: 94.6280% ( 7) 00:10:44.901 10633.150 - 10685.790: 94.6875% ( 8) 00:10:44.901 10685.790 - 10738.429: 94.7470% ( 8) 00:10:44.901 10738.429 - 10791.068: 94.8065% ( 8) 00:10:44.901 10791.068 - 10843.708: 94.8586% ( 7) 00:10:44.901 10843.708 - 10896.347: 94.9182% ( 8) 00:10:44.901 10896.347 - 10948.986: 94.9851% ( 9) 00:10:44.901 10948.986 - 11001.626: 95.0149% ( 4) 00:10:44.901 11001.626 - 11054.265: 95.1116% ( 13) 00:10:44.901 11054.265 - 11106.904: 95.1711% ( 8) 00:10:44.901 11106.904 - 11159.544: 95.2604% ( 12) 00:10:44.901 11159.544 - 11212.183: 95.3274% ( 9) 00:10:44.901 11212.183 - 11264.822: 95.3720% ( 6) 00:10:44.901 11264.822 - 11317.462: 95.4464% ( 10) 00:10:44.901 11317.462 - 11370.101: 95.5134% ( 9) 00:10:44.901 11370.101 - 11422.741: 95.5804% ( 9) 00:10:44.901 11422.741 - 11475.380: 95.6324% ( 7) 00:10:44.901 11475.380 - 11528.019: 95.7068% ( 10) 00:10:44.901 11528.019 - 11580.659: 95.7440% ( 5) 00:10:44.901 11580.659 - 11633.298: 95.7961% ( 7) 00:10:44.901 11633.298 - 11685.937: 95.8408% ( 6) 00:10:44.901 11685.937 - 11738.577: 95.9003% ( 8) 00:10:44.901 11738.577 - 11791.216: 95.9375% ( 5) 00:10:44.901 11791.216 - 11843.855: 95.9896% ( 7) 00:10:44.901 11843.855 - 11896.495: 96.0342% ( 6) 00:10:44.901 11896.495 - 11949.134: 96.0863% ( 7) 00:10:44.901 11949.134 - 12001.773: 96.1384% ( 7) 00:10:44.901 12001.773 - 12054.413: 96.1830% ( 6) 00:10:44.901 12054.413 - 12107.052: 96.2426% ( 8) 00:10:44.901 12107.052 - 12159.692: 96.2649% ( 3) 00:10:44.901 12159.692 - 12212.331: 96.3095% ( 6) 00:10:44.901 12212.331 - 12264.970: 96.3318% ( 3) 00:10:44.901 12264.970 - 12317.610: 96.3690% ( 5) 00:10:44.901 12317.610 - 12370.249: 96.4286% ( 8) 00:10:44.901 12370.249 - 12422.888: 96.4509% ( 3) 00:10:44.901 12422.888 - 12475.528: 96.4807% ( 4) 00:10:44.901 12475.528 - 12528.167: 
00:10:44.901 [Latency histogram buckets for PCIE (0000:00:10.0) NSID 1 from core 0 elided: cumulative IO count rises from 96.5104% at ~12.5 ms to 99.0476% by ~16.1 ms, then an outlier tail runs from ~49.9 ms up to 100.0000% at ~63.6 ms]
00:10:44.901 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:10:44.901 ==============================================================================
00:10:44.901        Range in us     Cumulative IO count
[buckets elided: first IO at ~7.9 ms (0.0223%), median near the ~8.9 ms bucket, 99.0476% by ~15.9 ms, outlier tail to 100.0000% at ~60.2 ms]
00:10:44.902 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:10:44.902 ==============================================================================
00:10:44.902        Range in us     Cumulative IO count
[buckets elided: first IO at ~7.9 ms (0.0223%), median near the ~8.9 ms bucket, 99.0476% by ~15.8 ms, outlier tail to 100.0000% at ~56.9 ms]
00:10:44.903 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:10:44.903 ==============================================================================
00:10:44.903        Range in us     Cumulative IO count
[buckets elided: first IO at ~7.9 ms (0.0223%), median near the ~8.9 ms bucket, 99.0476% by ~15.9 ms, outlier tail to 100.0000% at ~53.5 ms]
00:10:44.904 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:10:44.904 ==============================================================================
00:10:44.904        Range in us     Cumulative IO count
[buckets elided: first IO at ~7.8 ms (0.0149%), median near the ~8.9 ms bucket, 99.0476% by ~16.0 ms, outlier tail to 100.0000% at ~50.1 ms]
00:10:44.905 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:10:44.905 ==============================================================================
00:10:44.905        Range in us     Cumulative IO count
[buckets elided: first IO at ~7.9 ms (0.0222%), median near the ~8.9 ms bucket, 99.0521% by ~16.0 ms, outlier tail to 100.0000% at ~40.2 ms]
00:10:44.906 20:17:14 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:10:46.283 Initializing NVMe Controllers
00:10:46.283 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:46.283 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:46.283 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:46.283 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:46.283 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:10:46.283 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:10:46.283 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:10:46.283 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:10:46.283 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:10:46.283 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:10:46.283 Initialization complete. Launching workers.
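The spdk_nvme_perf invocation above drives the run whose results follow. Read against the tool's usual option set (an assumption, not verified against this exact SPDK revision): -q 128 sets the queue depth, -w write selects a sequential-write workload, -o 12288 issues 12 KiB IOs, -t 1 runs for one second, -L enables latency tracking (doubled as -LL for the detailed per-bucket histograms), and -i 0 selects the shared-memory instance ID. A quick Python sanity check tying the IO size to the throughput column of the summary table below (all figures copied from this log):

    # Throughput should equal IOPS * io_size, expressed in MiB/s.
    io_size = 12288                        # bytes per IO, from -o 12288
    iops = 10289.55                        # PCIE (0000:00:10.0) NSID 1 row
    mib_s = iops * io_size / (1024 ** 2)
    print(f"{mib_s:.2f} MiB/s")            # 120.58 MiB/s, matching the table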
00:10:46.283 ========================================================
00:10:46.283                                                                             Latency(us)
00:10:46.283 Device Information                     :       IOPS      MiB/s    Average        min        max
00:10:46.283 PCIE (0000:00:10.0) NSID 1 from core 0:   10289.55     120.58   12463.63    7243.91   46844.39
00:10:46.283 PCIE (0000:00:11.0) NSID 1 from core 0:   10289.55     120.58   12441.11    7385.41   45406.99
00:10:46.283 PCIE (0000:00:13.0) NSID 1 from core 0:   10289.55     120.58   12417.88    7188.93   44263.58
00:10:46.283 PCIE (0000:00:12.0) NSID 1 from core 0:   10289.55     120.58   12394.91    7399.99   42764.89
00:10:46.283 PCIE (0000:00:12.0) NSID 2 from core 0:   10289.55     120.58   12372.01    7583.25   41056.95
00:10:46.283 PCIE (0000:00:12.0) NSID 3 from core 0:   10353.46     121.33   12273.70    7387.08   31092.82
00:10:46.283 ========================================================
00:10:46.283 Total                                  :   61801.23     724.23   12393.75    7188.93   46844.39
00:10:46.283 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:10:46.283 =================================================================================
00:10:46.283   1.00000% :  7843.264us
00:10:46.283  10.00000% :  9159.248us
00:10:46.283  25.00000% :  9948.839us
00:10:46.283  50.00000% : 11738.577us
00:10:46.283  75.00000% : 13686.233us
00:10:46.283  90.00000% : 16739.316us
00:10:46.283  95.00000% : 18529.054us
00:10:46.283  98.00000% : 20318.792us
00:10:46.283  99.00000% : 34320.861us
00:10:46.283  99.50000% : 44638.175us
00:10:46.283  99.90000% : 46533.192us
00:10:46.283  99.99000% : 46954.307us
00:10:46.283  99.99900% : 46954.307us
00:10:46.283  99.99990% : 46954.307us
00:10:46.283  99.99999% : 46954.307us
00:10:46.283 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:10:46.283 =================================================================================
00:10:46.283   1.00000% :  7843.264us
00:10:46.283  10.00000% :  9159.248us
00:10:46.283  25.00000% :  9948.839us
00:10:46.283  50.00000% : 11791.216us
00:10:46.283  75.00000% : 13580.954us
00:10:46.283  90.00000% : 16949.873us
00:10:46.283  95.00000% : 18318.496us
00:10:46.283  98.00000% : 20739.907us
00:10:46.283  99.00000% : 32846.959us
00:10:46.283  99.50000% : 43164.273us
00:10:46.283  99.90000% : 45059.290us
00:10:46.283  99.99000% : 45480.405us
00:10:46.283  99.99900% : 45480.405us
00:10:46.283  99.99990% : 45480.405us
00:10:46.283  99.99999% : 45480.405us
00:10:46.283 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:10:46.283 =================================================================================
00:10:46.283   1.00000% :  8001.182us
00:10:46.283  10.00000% :  9106.609us
00:10:46.283  25.00000% :  9948.839us
00:10:46.283  50.00000% : 11791.216us
00:10:46.283  75.00000% : 13475.676us
00:10:46.283  90.00000% : 16949.873us
00:10:46.283  95.00000% : 18213.218us
00:10:46.283  98.00000% : 20634.628us
00:10:46.283  99.00000% : 32004.729us
00:10:46.283  99.50000% : 42322.043us
00:10:46.283  99.90000% : 44006.503us
00:10:46.284  99.99000% : 44427.618us
00:10:46.284  99.99900% : 44427.618us
00:10:46.284  99.99990% : 44427.618us
00:10:46.284  99.99999% : 44427.618us
00:10:46.284 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:10:46.284 =================================================================================
00:10:46.284   1.00000% :  7948.543us
00:10:46.284  10.00000% :  9211.888us
00:10:46.284  25.00000% :  9948.839us
00:10:46.284  50.00000% : 11791.216us
00:10:46.284  75.00000% : 13423.036us
00:10:46.284  90.00000% : 16634.037us
00:10:46.284  95.00000% : 18423.775us
00:10:46.284  98.00000% : 20529.349us
00:10:46.284  99.00000% : 31162.500us
00:10:46.284  99.50000% : 40848.141us
00:10:46.284  99.90000% : 42532.601us
00:10:46.284  99.99000% : 42743.158us
00:10:46.284  99.99900% : 42953.716us
00:10:46.284  99.99990% : 42953.716us
00:10:46.284  99.99999% : 42953.716us
00:10:46.284 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:10:46.284 =================================================================================
00:10:46.284   1.00000% :  7843.264us
00:10:46.284  10.00000% :  9159.248us
00:10:46.284  25.00000% :  9948.839us
00:10:46.284  50.00000% : 11791.216us
00:10:46.284  75.00000% : 13580.954us
00:10:46.284  90.00000% : 16528.758us
00:10:46.284  95.00000% : 18634.333us
00:10:46.284  98.00000% : 20002.956us
00:10:46.284  99.00000% : 29478.040us
00:10:46.284  99.50000% : 39163.682us
00:10:46.284  99.90000% : 40848.141us
00:10:46.284  99.99000% : 41058.699us
00:10:46.284  99.99900% : 41058.699us
00:10:46.284  99.99990% : 41058.699us
00:10:46.284  99.99999% : 41058.699us
00:10:46.284 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:10:46.284 =================================================================================
00:10:46.284   1.00000% :  7895.904us
00:10:46.284  10.00000% :  9159.248us
00:10:46.284  25.00000% : 10001.478us
00:10:46.284  50.00000% : 11791.216us
00:10:46.284  75.00000% : 13580.954us
00:10:46.284  90.00000% : 16844.594us
00:10:46.284  95.00000% : 18529.054us
00:10:46.284  98.00000% : 19792.398us
00:10:46.284  99.00000% : 20529.349us
00:10:46.284  99.50000% : 29056.925us
00:10:46.284  99.90000% : 30741.385us
00:10:46.284  99.99000% : 31162.500us
00:10:46.284  99.99900% : 31162.500us
00:10:46.284  99.99990% : 31162.500us
00:10:46.284  99.99999% : 31162.500us
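The Average latency and IOPS columns in the table above are mutually consistent with the configured queue depth: by Little's law, the number of IOs in flight equals the completion rate times the mean latency. A minimal check in Python, using the first device row (values copied from this log):

    # Little's law: in-flight IOs ~= IOPS * mean latency (in seconds).
    iops = 10289.55              # PCIE (0000:00:10.0) NSID 1
    avg_us = 12463.63            # Average column, microseconds
    in_flight = iops * avg_us / 1e6
    print(round(in_flight, 1))   # ~128.2, i.e. the -q 128 queue depth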
9053.969 - 9106.609: 9.8797% ( 64) 00:10:46.284 9106.609 - 9159.248: 10.6172% ( 76) 00:10:46.284 9159.248 - 9211.888: 11.4907% ( 90) 00:10:46.284 9211.888 - 9264.527: 12.2089% ( 74) 00:10:46.284 9264.527 - 9317.166: 12.9270% ( 74) 00:10:46.284 9317.166 - 9369.806: 13.7908% ( 89) 00:10:46.284 9369.806 - 9422.445: 14.5380% ( 77) 00:10:46.284 9422.445 - 9475.084: 15.5474% ( 104) 00:10:46.284 9475.084 - 9527.724: 16.5567% ( 104) 00:10:46.284 9527.724 - 9580.363: 17.6727% ( 115) 00:10:46.284 9580.363 - 9633.002: 18.7597% ( 112) 00:10:46.284 9633.002 - 9685.642: 19.9631% ( 124) 00:10:46.284 9685.642 - 9738.281: 21.0113% ( 108) 00:10:46.284 9738.281 - 9790.920: 21.9526% ( 97) 00:10:46.284 9790.920 - 9843.560: 23.2628% ( 135) 00:10:46.284 9843.560 - 9896.199: 24.3207% ( 109) 00:10:46.284 9896.199 - 9948.839: 25.3106% ( 102) 00:10:46.284 9948.839 - 10001.478: 26.4266% ( 115) 00:10:46.284 10001.478 - 10054.117: 27.3292% ( 93) 00:10:46.284 10054.117 - 10106.757: 28.1638% ( 86) 00:10:46.284 10106.757 - 10159.396: 28.7558% ( 61) 00:10:46.284 10159.396 - 10212.035: 29.2799% ( 54) 00:10:46.284 10212.035 - 10264.675: 29.8719% ( 61) 00:10:46.284 10264.675 - 10317.314: 30.5318% ( 68) 00:10:46.284 10317.314 - 10369.953: 31.0753% ( 56) 00:10:46.284 10369.953 - 10422.593: 31.6382% ( 58) 00:10:46.284 10422.593 - 10475.232: 32.1137% ( 49) 00:10:46.284 10475.232 - 10527.871: 32.6863% ( 59) 00:10:46.284 10527.871 - 10580.511: 33.1036% ( 43) 00:10:46.284 10580.511 - 10633.150: 33.5016% ( 41) 00:10:46.284 10633.150 - 10685.790: 33.9577% ( 47) 00:10:46.284 10685.790 - 10738.429: 34.3265% ( 38) 00:10:46.284 10738.429 - 10791.068: 34.7535% ( 44) 00:10:46.284 10791.068 - 10843.708: 35.2679% ( 53) 00:10:46.284 10843.708 - 10896.347: 35.7240% ( 47) 00:10:46.284 10896.347 - 10948.986: 36.0928% ( 38) 00:10:46.284 10948.986 - 11001.626: 36.3548% ( 27) 00:10:46.284 11001.626 - 11054.265: 36.6266% ( 28) 00:10:46.284 11054.265 - 11106.904: 37.1797% ( 57) 00:10:46.284 11106.904 - 11159.544: 37.7038% ( 54) 00:10:46.284 11159.544 - 11212.183: 38.1405% ( 45) 00:10:46.284 11212.183 - 11264.822: 38.8296% ( 71) 00:10:46.284 11264.822 - 11317.462: 39.8583% ( 106) 00:10:46.284 11317.462 - 11370.101: 41.0520% ( 123) 00:10:46.284 11370.101 - 11422.741: 41.9061% ( 88) 00:10:46.284 11422.741 - 11475.380: 42.8668% ( 99) 00:10:46.284 11475.380 - 11528.019: 44.4488% ( 163) 00:10:46.284 11528.019 - 11580.659: 46.4965% ( 211) 00:10:46.284 11580.659 - 11633.298: 48.2531% ( 181) 00:10:46.284 11633.298 - 11685.937: 49.8544% ( 165) 00:10:46.284 11685.937 - 11738.577: 51.7275% ( 193) 00:10:46.284 11738.577 - 11791.216: 52.8727% ( 118) 00:10:46.284 11791.216 - 11843.855: 53.9887% ( 115) 00:10:46.284 11843.855 - 11896.495: 55.0854% ( 113) 00:10:46.284 11896.495 - 11949.134: 56.6091% ( 157) 00:10:46.284 11949.134 - 12001.773: 57.8125% ( 124) 00:10:46.284 12001.773 - 12054.413: 59.0741% ( 130) 00:10:46.284 12054.413 - 12107.052: 60.3261% ( 129) 00:10:46.284 12107.052 - 12159.692: 61.3354% ( 104) 00:10:46.284 12159.692 - 12212.331: 62.2186% ( 91) 00:10:46.284 12212.331 - 12264.970: 62.9950% ( 80) 00:10:46.284 12264.970 - 12317.610: 63.8393% ( 87) 00:10:46.284 12317.610 - 12370.249: 64.5769% ( 76) 00:10:46.284 12370.249 - 12422.888: 65.4018% ( 85) 00:10:46.284 12422.888 - 12475.528: 66.4984% ( 113) 00:10:46.284 12475.528 - 12528.167: 67.2748% ( 80) 00:10:46.284 12528.167 - 12580.806: 67.8863% ( 63) 00:10:46.284 12580.806 - 12633.446: 68.4200% ( 55) 00:10:46.284 12633.446 - 12686.085: 68.8276% ( 42) 00:10:46.284 12686.085 - 12738.724: 69.1673% ( 35) 
00:10:46.284 12738.724 - 12791.364: 69.5749% ( 42) 00:10:46.284 12791.364 - 12844.003: 70.1378% ( 58) 00:10:46.284 12844.003 - 12896.643: 70.7395% ( 62) 00:10:46.284 12896.643 - 12949.282: 71.4577% ( 74) 00:10:46.284 12949.282 - 13001.921: 72.0012% ( 56) 00:10:46.284 13001.921 - 13054.561: 72.4282% ( 44) 00:10:46.284 13054.561 - 13107.200: 72.7679% ( 35) 00:10:46.284 13107.200 - 13159.839: 73.0687% ( 31) 00:10:46.284 13159.839 - 13212.479: 73.3307% ( 27) 00:10:46.284 13212.479 - 13265.118: 73.5443% ( 22) 00:10:46.284 13265.118 - 13317.757: 73.8645% ( 33) 00:10:46.284 13317.757 - 13370.397: 74.0489% ( 19) 00:10:46.284 13370.397 - 13423.036: 74.2527% ( 21) 00:10:46.284 13423.036 - 13475.676: 74.4662% ( 22) 00:10:46.284 13475.676 - 13580.954: 74.9806% ( 53) 00:10:46.284 13580.954 - 13686.233: 75.6211% ( 66) 00:10:46.284 13686.233 - 13791.512: 76.2519% ( 65) 00:10:46.284 13791.512 - 13896.790: 76.7566% ( 52) 00:10:46.284 13896.790 - 14002.069: 77.5136% ( 78) 00:10:46.284 14002.069 - 14107.348: 78.0571% ( 56) 00:10:46.284 14107.348 - 14212.627: 78.4841% ( 44) 00:10:46.284 14212.627 - 14317.905: 79.0955% ( 63) 00:10:46.284 14317.905 - 14423.184: 79.5419% ( 46) 00:10:46.284 14423.184 - 14528.463: 80.4445% ( 93) 00:10:46.284 14528.463 - 14633.741: 81.0559% ( 63) 00:10:46.284 14633.741 - 14739.020: 81.4053% ( 36) 00:10:46.284 14739.020 - 14844.299: 81.7255% ( 33) 00:10:46.284 14844.299 - 14949.578: 82.1623% ( 45) 00:10:46.284 14949.578 - 15054.856: 82.8125% ( 67) 00:10:46.284 15054.856 - 15160.135: 83.4239% ( 63) 00:10:46.284 15160.135 - 15265.414: 84.2100% ( 81) 00:10:46.284 15265.414 - 15370.692: 84.9379% ( 75) 00:10:46.284 15370.692 - 15475.971: 85.5105% ( 59) 00:10:46.284 15475.971 - 15581.250: 85.9181% ( 42) 00:10:46.284 15581.250 - 15686.529: 86.3742% ( 47) 00:10:46.284 15686.529 - 15791.807: 86.8207% ( 46) 00:10:46.284 15791.807 - 15897.086: 87.3059% ( 50) 00:10:46.284 15897.086 - 16002.365: 87.6262% ( 33) 00:10:46.284 16002.365 - 16107.643: 88.0144% ( 40) 00:10:46.284 16107.643 - 16212.922: 88.3832% ( 38) 00:10:46.284 16212.922 - 16318.201: 88.6743% ( 30) 00:10:46.284 16318.201 - 16423.480: 88.9363% ( 27) 00:10:46.284 16423.480 - 16528.758: 89.3634% ( 44) 00:10:46.284 16528.758 - 16634.037: 89.6545% ( 30) 00:10:46.284 16634.037 - 16739.316: 90.0718% ( 43) 00:10:46.284 16739.316 - 16844.594: 90.3921% ( 33) 00:10:46.284 16844.594 - 16949.873: 90.6250% ( 24) 00:10:46.284 16949.873 - 17055.152: 90.9744% ( 36) 00:10:46.284 17055.152 - 17160.431: 91.4596% ( 50) 00:10:46.284 17160.431 - 17265.709: 91.8187% ( 37) 00:10:46.284 17265.709 - 17370.988: 92.1778% ( 37) 00:10:46.284 17370.988 - 17476.267: 92.4884% ( 32) 00:10:46.284 17476.267 - 17581.545: 92.7116% ( 23) 00:10:46.284 17581.545 - 17686.824: 92.9348% ( 23) 00:10:46.284 17686.824 - 17792.103: 93.1774% ( 25) 00:10:46.284 17792.103 - 17897.382: 93.5074% ( 34) 00:10:46.284 17897.382 - 18002.660: 93.8568% ( 36) 00:10:46.284 18002.660 - 18107.939: 94.1770% ( 33) 00:10:46.284 18107.939 - 18213.218: 94.4585% ( 29) 00:10:46.284 18213.218 - 18318.496: 94.6817% ( 23) 00:10:46.284 18318.496 - 18423.775: 94.8758% ( 20) 00:10:46.284 18423.775 - 18529.054: 95.0796% ( 21) 00:10:46.284 18529.054 - 18634.333: 95.3028% ( 23) 00:10:46.284 18634.333 - 18739.611: 95.5648% ( 27) 00:10:46.284 18739.611 - 18844.890: 95.7395% ( 18) 00:10:46.284 18844.890 - 18950.169: 95.9239% ( 19) 00:10:46.284 18950.169 - 19055.447: 96.1083% ( 19) 00:10:46.284 19055.447 - 19160.726: 96.3898% ( 29) 00:10:46.284 19160.726 - 19266.005: 96.6421% ( 26) 00:10:46.284 19266.005 - 19371.284: 
96.9818% ( 35) 00:10:46.284 19371.284 - 19476.562: 97.0885% ( 11) 00:10:46.284 19476.562 - 19581.841: 97.1953% ( 11) 00:10:46.284 19581.841 - 19687.120: 97.2535% ( 6) 00:10:46.284 19687.120 - 19792.398: 97.3505% ( 10) 00:10:46.284 19792.398 - 19897.677: 97.4961% ( 15) 00:10:46.284 19897.677 - 20002.956: 97.6514% ( 16) 00:10:46.284 20002.956 - 20108.235: 97.8067% ( 16) 00:10:46.284 20108.235 - 20213.513: 97.9620% ( 16) 00:10:46.284 20213.513 - 20318.792: 98.0784% ( 12) 00:10:46.284 20318.792 - 20424.071: 98.1852% ( 11) 00:10:46.284 20424.071 - 20529.349: 98.2434% ( 6) 00:10:46.284 20529.349 - 20634.628: 98.2919% ( 5) 00:10:46.284 20634.628 - 20739.907: 98.3599% ( 7) 00:10:46.284 20739.907 - 20845.186: 98.4181% ( 6) 00:10:46.284 20845.186 - 20950.464: 98.4860% ( 7) 00:10:46.284 20950.464 - 21055.743: 98.5345% ( 5) 00:10:46.284 21055.743 - 21161.022: 98.5928% ( 6) 00:10:46.284 21161.022 - 21266.300: 98.6316% ( 4) 00:10:46.284 21266.300 - 21371.579: 98.6607% ( 3) 00:10:46.284 21371.579 - 21476.858: 98.6801% ( 2) 00:10:46.284 21476.858 - 21582.137: 98.7189% ( 4) 00:10:46.284 21582.137 - 21687.415: 98.7481% ( 3) 00:10:46.284 21687.415 - 21792.694: 98.7578% ( 1) 00:10:46.284 33268.074 - 33478.631: 98.7772% ( 2) 00:10:46.284 33478.631 - 33689.189: 98.8548% ( 8) 00:10:46.284 33689.189 - 33899.746: 98.9227% ( 7) 00:10:46.284 33899.746 - 34110.304: 98.9713% ( 5) 00:10:46.284 34110.304 - 34320.861: 99.0295% ( 6) 00:10:46.284 34320.861 - 34531.418: 99.0974% ( 7) 00:10:46.284 34531.418 - 34741.976: 99.1557% ( 6) 00:10:46.284 34741.976 - 34952.533: 99.2333% ( 8) 00:10:46.284 34952.533 - 35163.091: 99.2915% ( 6) 00:10:46.284 35163.091 - 35373.648: 99.3692% ( 8) 00:10:46.284 35373.648 - 35584.206: 99.3789% ( 1) 00:10:46.284 43795.945 - 44006.503: 99.4177% ( 4) 00:10:46.284 44006.503 - 44217.060: 99.4662% ( 5) 00:10:46.284 44217.060 - 44427.618: 99.4953% ( 3) 00:10:46.284 44427.618 - 44638.175: 99.5342% ( 4) 00:10:46.284 44638.175 - 44848.733: 99.5924% ( 6) 00:10:46.284 44848.733 - 45059.290: 99.6312% ( 4) 00:10:46.284 45059.290 - 45269.847: 99.6700% ( 4) 00:10:46.284 45269.847 - 45480.405: 99.7283% ( 6) 00:10:46.284 45480.405 - 45690.962: 99.7574% ( 3) 00:10:46.284 45690.962 - 45901.520: 99.8059% ( 5) 00:10:46.284 45901.520 - 46112.077: 99.8544% ( 5) 00:10:46.284 46112.077 - 46322.635: 99.8835% ( 3) 00:10:46.284 46322.635 - 46533.192: 99.9418% ( 6) 00:10:46.284 46533.192 - 46743.749: 99.9806% ( 4) 00:10:46.284 46743.749 - 46954.307: 100.0000% ( 2) 00:10:46.284 00:10:46.284 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:46.284 ============================================================================== 00:10:46.284 Range in us Cumulative IO count 00:10:46.284 7369.510 - 7422.149: 0.0097% ( 1) 00:10:46.284 7422.149 - 7474.789: 0.0388% ( 3) 00:10:46.284 7474.789 - 7527.428: 0.0873% ( 5) 00:10:46.284 7527.428 - 7580.067: 0.1165% ( 3) 00:10:46.284 7580.067 - 7632.707: 0.3106% ( 20) 00:10:46.284 7632.707 - 7685.346: 0.6211% ( 32) 00:10:46.284 7685.346 - 7737.986: 0.6793% ( 6) 00:10:46.284 7737.986 - 7790.625: 0.8055% ( 13) 00:10:46.284 7790.625 - 7843.264: 1.0967% ( 30) 00:10:46.284 7843.264 - 7895.904: 1.1646% ( 7) 00:10:46.284 7895.904 - 7948.543: 1.2519% ( 9) 00:10:46.284 7948.543 - 8001.182: 1.3781% ( 13) 00:10:46.284 8001.182 - 8053.822: 1.6498% ( 28) 00:10:46.284 8053.822 - 8106.461: 2.1157% ( 48) 00:10:46.284 8106.461 - 8159.100: 2.4748% ( 37) 00:10:46.284 8159.100 - 8211.740: 2.6786% ( 21) 00:10:46.284 8211.740 - 8264.379: 3.0377% ( 37) 00:10:46.284 8264.379 - 8317.018: 3.1832% ( 
15) 00:10:46.284 8317.018 - 8369.658: 3.5908% ( 42) 00:10:46.284 8369.658 - 8422.297: 3.8238% ( 24) 00:10:46.284 8422.297 - 8474.937: 4.0858% ( 27) 00:10:46.284 8474.937 - 8527.576: 4.4158% ( 34) 00:10:46.284 8527.576 - 8580.215: 4.8234% ( 42) 00:10:46.284 8580.215 - 8632.855: 5.3280% ( 52) 00:10:46.284 8632.855 - 8685.494: 5.5998% ( 28) 00:10:46.284 8685.494 - 8738.133: 5.9686% ( 38) 00:10:46.284 8738.133 - 8790.773: 6.4053% ( 45) 00:10:46.284 8790.773 - 8843.412: 6.9488% ( 56) 00:10:46.284 8843.412 - 8896.051: 7.4922% ( 56) 00:10:46.284 8896.051 - 8948.691: 8.0842% ( 61) 00:10:46.284 8948.691 - 9001.330: 8.6762% ( 61) 00:10:46.284 9001.330 - 9053.969: 9.1712% ( 51) 00:10:46.284 9053.969 - 9106.609: 9.6176% ( 46) 00:10:46.284 9106.609 - 9159.248: 10.2484% ( 65) 00:10:46.284 9159.248 - 9211.888: 10.9375% ( 71) 00:10:46.284 9211.888 - 9264.527: 11.8789% ( 97) 00:10:46.284 9264.527 - 9317.166: 12.8979% ( 105) 00:10:46.284 9317.166 - 9369.806: 14.0528% ( 119) 00:10:46.284 9369.806 - 9422.445: 15.1495% ( 113) 00:10:46.284 9422.445 - 9475.084: 16.2073% ( 109) 00:10:46.284 9475.084 - 9527.724: 17.2651% ( 109) 00:10:46.284 9527.724 - 9580.363: 18.0998% ( 86) 00:10:46.284 9580.363 - 9633.002: 19.0314% ( 96) 00:10:46.284 9633.002 - 9685.642: 20.0408% ( 104) 00:10:46.284 9685.642 - 9738.281: 21.1374% ( 113) 00:10:46.284 9738.281 - 9790.920: 22.0691% ( 96) 00:10:46.284 9790.920 - 9843.560: 23.0299% ( 99) 00:10:46.284 9843.560 - 9896.199: 24.0974% ( 110) 00:10:46.284 9896.199 - 9948.839: 25.2232% ( 116) 00:10:46.284 9948.839 - 10001.478: 26.1258% ( 93) 00:10:46.284 10001.478 - 10054.117: 27.2321% ( 114) 00:10:46.284 10054.117 - 10106.757: 28.2318% ( 103) 00:10:46.284 10106.757 - 10159.396: 29.1246% ( 92) 00:10:46.284 10159.396 - 10212.035: 29.8622% ( 76) 00:10:46.284 10212.035 - 10264.675: 30.5318% ( 69) 00:10:46.284 10264.675 - 10317.314: 30.9783% ( 46) 00:10:46.284 10317.314 - 10369.953: 31.2985% ( 33) 00:10:46.284 10369.953 - 10422.593: 31.5606% ( 27) 00:10:46.284 10422.593 - 10475.232: 31.8420% ( 29) 00:10:46.284 10475.232 - 10527.871: 32.1429% ( 31) 00:10:46.284 10527.871 - 10580.511: 32.4243% ( 29) 00:10:46.284 10580.511 - 10633.150: 32.9193% ( 51) 00:10:46.284 10633.150 - 10685.790: 33.3463% ( 44) 00:10:46.284 10685.790 - 10738.429: 34.0936% ( 77) 00:10:46.284 10738.429 - 10791.068: 34.7050% ( 63) 00:10:46.284 10791.068 - 10843.708: 35.2096% ( 52) 00:10:46.284 10843.708 - 10896.347: 35.7531% ( 56) 00:10:46.284 10896.347 - 10948.986: 36.2286% ( 49) 00:10:46.284 10948.986 - 11001.626: 36.6654% ( 45) 00:10:46.284 11001.626 - 11054.265: 37.0924% ( 44) 00:10:46.284 11054.265 - 11106.904: 37.5388% ( 46) 00:10:46.284 11106.904 - 11159.544: 38.1114% ( 59) 00:10:46.284 11159.544 - 11212.183: 38.6840% ( 59) 00:10:46.284 11212.183 - 11264.822: 39.2275% ( 56) 00:10:46.284 11264.822 - 11317.462: 40.0427% ( 84) 00:10:46.284 11317.462 - 11370.101: 41.0035% ( 99) 00:10:46.284 11370.101 - 11422.741: 42.1390% ( 117) 00:10:46.284 11422.741 - 11475.380: 43.2259% ( 112) 00:10:46.284 11475.380 - 11528.019: 44.5846% ( 140) 00:10:46.285 11528.019 - 11580.659: 45.8075% ( 126) 00:10:46.285 11580.659 - 11633.298: 47.3020% ( 154) 00:10:46.285 11633.298 - 11685.937: 48.6413% ( 138) 00:10:46.285 11685.937 - 11738.577: 49.9612% ( 136) 00:10:46.285 11738.577 - 11791.216: 51.5528% ( 164) 00:10:46.285 11791.216 - 11843.855: 53.1056% ( 160) 00:10:46.285 11843.855 - 11896.495: 54.7554% ( 170) 00:10:46.285 11896.495 - 11949.134: 56.4053% ( 170) 00:10:46.285 11949.134 - 12001.773: 58.1036% ( 175) 00:10:46.285 12001.773 - 12054.413: 
59.6273% ( 157) 00:10:46.285 12054.413 - 12107.052: 61.1510% ( 157) 00:10:46.285 12107.052 - 12159.692: 62.3835% ( 127) 00:10:46.285 12159.692 - 12212.331: 63.5870% ( 124) 00:10:46.285 12212.331 - 12264.970: 64.6545% ( 110) 00:10:46.285 12264.970 - 12317.610: 65.6444% ( 102) 00:10:46.285 12317.610 - 12370.249: 66.7605% ( 115) 00:10:46.285 12370.249 - 12422.888: 67.4689% ( 73) 00:10:46.285 12422.888 - 12475.528: 68.0512% ( 60) 00:10:46.285 12475.528 - 12528.167: 68.5559% ( 52) 00:10:46.285 12528.167 - 12580.806: 68.9926% ( 45) 00:10:46.285 12580.806 - 12633.446: 69.4488% ( 47) 00:10:46.285 12633.446 - 12686.085: 69.8661% ( 43) 00:10:46.285 12686.085 - 12738.724: 70.2057% ( 35) 00:10:46.285 12738.724 - 12791.364: 70.6134% ( 42) 00:10:46.285 12791.364 - 12844.003: 70.9918% ( 39) 00:10:46.285 12844.003 - 12896.643: 71.3218% ( 34) 00:10:46.285 12896.643 - 12949.282: 71.6518% ( 34) 00:10:46.285 12949.282 - 13001.921: 72.0982% ( 46) 00:10:46.285 13001.921 - 13054.561: 72.4088% ( 32) 00:10:46.285 13054.561 - 13107.200: 72.7679% ( 37) 00:10:46.285 13107.200 - 13159.839: 73.0881% ( 33) 00:10:46.285 13159.839 - 13212.479: 73.4084% ( 33) 00:10:46.285 13212.479 - 13265.118: 73.9130% ( 52) 00:10:46.285 13265.118 - 13317.757: 74.2139% ( 31) 00:10:46.285 13317.757 - 13370.397: 74.4662% ( 26) 00:10:46.285 13370.397 - 13423.036: 74.6894% ( 23) 00:10:46.285 13423.036 - 13475.676: 74.9515% ( 27) 00:10:46.285 13475.676 - 13580.954: 75.8055% ( 88) 00:10:46.285 13580.954 - 13686.233: 76.2519% ( 46) 00:10:46.285 13686.233 - 13791.512: 76.6887% ( 45) 00:10:46.285 13791.512 - 13896.790: 77.2516% ( 58) 00:10:46.285 13896.790 - 14002.069: 77.7174% ( 48) 00:10:46.285 14002.069 - 14107.348: 78.1929% ( 49) 00:10:46.285 14107.348 - 14212.627: 78.7364% ( 56) 00:10:46.285 14212.627 - 14317.905: 79.3478% ( 63) 00:10:46.285 14317.905 - 14423.184: 80.1048% ( 78) 00:10:46.285 14423.184 - 14528.463: 80.5707% ( 48) 00:10:46.285 14528.463 - 14633.741: 80.9297% ( 37) 00:10:46.285 14633.741 - 14739.020: 81.3373% ( 42) 00:10:46.285 14739.020 - 14844.299: 81.8905% ( 57) 00:10:46.285 14844.299 - 14949.578: 82.4340% ( 56) 00:10:46.285 14949.578 - 15054.856: 83.0551% ( 64) 00:10:46.285 15054.856 - 15160.135: 83.6180% ( 58) 00:10:46.285 15160.135 - 15265.414: 83.9674% ( 36) 00:10:46.285 15265.414 - 15370.692: 84.3944% ( 44) 00:10:46.285 15370.692 - 15475.971: 84.8408% ( 46) 00:10:46.285 15475.971 - 15581.250: 85.5687% ( 75) 00:10:46.285 15581.250 - 15686.529: 86.3160% ( 77) 00:10:46.285 15686.529 - 15791.807: 87.0050% ( 71) 00:10:46.285 15791.807 - 15897.086: 87.4903% ( 50) 00:10:46.285 15897.086 - 16002.365: 87.7814% ( 30) 00:10:46.285 16002.365 - 16107.643: 88.0629% ( 29) 00:10:46.285 16107.643 - 16212.922: 88.2182% ( 16) 00:10:46.285 16212.922 - 16318.201: 88.3637% ( 15) 00:10:46.285 16318.201 - 16423.480: 88.4220% ( 6) 00:10:46.285 16423.480 - 16528.758: 88.4802% ( 6) 00:10:46.285 16528.758 - 16634.037: 88.6549% ( 18) 00:10:46.285 16634.037 - 16739.316: 89.1304% ( 49) 00:10:46.285 16739.316 - 16844.594: 89.4895% ( 37) 00:10:46.285 16844.594 - 16949.873: 90.0039% ( 53) 00:10:46.285 16949.873 - 17055.152: 90.6153% ( 63) 00:10:46.285 17055.152 - 17160.431: 91.1685% ( 57) 00:10:46.285 17160.431 - 17265.709: 91.8090% ( 66) 00:10:46.285 17265.709 - 17370.988: 92.2748% ( 48) 00:10:46.285 17370.988 - 17476.267: 92.7019% ( 44) 00:10:46.285 17476.267 - 17581.545: 93.1095% ( 42) 00:10:46.285 17581.545 - 17686.824: 93.6335% ( 54) 00:10:46.285 17686.824 - 17792.103: 94.0120% ( 39) 00:10:46.285 17792.103 - 17897.382: 94.1576% ( 15) 00:10:46.285 
17897.382 - 18002.660: 94.3032% ( 15) 00:10:46.285 18002.660 - 18107.939: 94.5943% ( 30) 00:10:46.285 18107.939 - 18213.218: 94.9534% ( 37) 00:10:46.285 18213.218 - 18318.496: 95.1960% ( 25) 00:10:46.285 18318.496 - 18423.775: 95.3804% ( 19) 00:10:46.285 18423.775 - 18529.054: 95.5454% ( 17) 00:10:46.285 18529.054 - 18634.333: 95.7104% ( 17) 00:10:46.285 18634.333 - 18739.611: 95.8657% ( 16) 00:10:46.285 18739.611 - 18844.890: 96.0113% ( 15) 00:10:46.285 18844.890 - 18950.169: 96.1665% ( 16) 00:10:46.285 18950.169 - 19055.447: 96.3898% ( 23) 00:10:46.285 19055.447 - 19160.726: 96.5062% ( 12) 00:10:46.285 19160.726 - 19266.005: 96.6227% ( 12) 00:10:46.285 19266.005 - 19371.284: 96.7003% ( 8) 00:10:46.285 19371.284 - 19476.562: 96.7488% ( 5) 00:10:46.285 19476.562 - 19581.841: 96.7780% ( 3) 00:10:46.285 19581.841 - 19687.120: 96.8265% ( 5) 00:10:46.285 19687.120 - 19792.398: 96.9332% ( 11) 00:10:46.285 19792.398 - 19897.677: 97.0885% ( 16) 00:10:46.285 19897.677 - 20002.956: 97.2438% ( 16) 00:10:46.285 20002.956 - 20108.235: 97.4088% ( 17) 00:10:46.285 20108.235 - 20213.513: 97.7290% ( 33) 00:10:46.285 20213.513 - 20318.792: 97.8067% ( 8) 00:10:46.285 20318.792 - 20424.071: 97.8746% ( 7) 00:10:46.285 20424.071 - 20529.349: 97.8843% ( 1) 00:10:46.285 20529.349 - 20634.628: 97.9231% ( 4) 00:10:46.285 20634.628 - 20739.907: 98.0299% ( 11) 00:10:46.285 20739.907 - 20845.186: 98.1075% ( 8) 00:10:46.285 20845.186 - 20950.464: 98.2046% ( 10) 00:10:46.285 20950.464 - 21055.743: 98.2822% ( 8) 00:10:46.285 21055.743 - 21161.022: 98.3502% ( 7) 00:10:46.285 21161.022 - 21266.300: 98.4181% ( 7) 00:10:46.285 21266.300 - 21371.579: 98.4472% ( 3) 00:10:46.285 21371.579 - 21476.858: 98.4860% ( 4) 00:10:46.285 21476.858 - 21582.137: 98.5248% ( 4) 00:10:46.285 21582.137 - 21687.415: 98.5637% ( 4) 00:10:46.285 21687.415 - 21792.694: 98.6025% ( 4) 00:10:46.285 21792.694 - 21897.973: 98.6413% ( 4) 00:10:46.285 21897.973 - 22003.251: 98.6801% ( 4) 00:10:46.285 22003.251 - 22108.530: 98.7189% ( 4) 00:10:46.285 22108.530 - 22213.809: 98.7578% ( 4) 00:10:46.285 31794.172 - 32004.729: 98.7675% ( 1) 00:10:46.285 32004.729 - 32215.287: 98.8354% ( 7) 00:10:46.285 32215.287 - 32425.844: 98.9033% ( 7) 00:10:46.285 32425.844 - 32636.402: 98.9810% ( 8) 00:10:46.285 32636.402 - 32846.959: 99.0489% ( 7) 00:10:46.285 32846.959 - 33057.516: 99.1168% ( 7) 00:10:46.285 33057.516 - 33268.074: 99.1848% ( 7) 00:10:46.285 33268.074 - 33478.631: 99.2624% ( 8) 00:10:46.285 33478.631 - 33689.189: 99.3304% ( 7) 00:10:46.285 33689.189 - 33899.746: 99.3789% ( 5) 00:10:46.285 42532.601 - 42743.158: 99.4080% ( 3) 00:10:46.285 42743.158 - 42953.716: 99.4565% ( 5) 00:10:46.285 42953.716 - 43164.273: 99.5050% ( 5) 00:10:46.285 43164.273 - 43374.831: 99.5439% ( 4) 00:10:46.285 43374.831 - 43585.388: 99.5924% ( 5) 00:10:46.285 43585.388 - 43795.945: 99.6409% ( 5) 00:10:46.285 43795.945 - 44006.503: 99.6894% ( 5) 00:10:46.285 44006.503 - 44217.060: 99.7380% ( 5) 00:10:46.285 44217.060 - 44427.618: 99.7865% ( 5) 00:10:46.285 44427.618 - 44638.175: 99.8447% ( 6) 00:10:46.285 44638.175 - 44848.733: 99.8835% ( 4) 00:10:46.285 44848.733 - 45059.290: 99.9321% ( 5) 00:10:46.285 45059.290 - 45269.847: 99.9709% ( 4) 00:10:46.285 45269.847 - 45480.405: 100.0000% ( 3) 00:10:46.285 00:10:46.285 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:46.285 ============================================================================== 00:10:46.285 Range in us Cumulative IO count 00:10:46.285 7158.953 - 7211.592: 0.0097% ( 1) 00:10:46.285 7422.149 - 
7474.789: 0.0776% ( 7) 00:10:46.285 7474.789 - 7527.428: 0.1359% ( 6) 00:10:46.285 7527.428 - 7580.067: 0.2232% ( 9) 00:10:46.285 7580.067 - 7632.707: 0.3882% ( 17) 00:10:46.285 7632.707 - 7685.346: 0.4950% ( 11) 00:10:46.285 7685.346 - 7737.986: 0.5726% ( 8) 00:10:46.285 7737.986 - 7790.625: 0.6502% ( 8) 00:10:46.285 7790.625 - 7843.264: 0.6988% ( 5) 00:10:46.285 7843.264 - 7895.904: 0.8249% ( 13) 00:10:46.285 7895.904 - 7948.543: 0.9899% ( 17) 00:10:46.285 7948.543 - 8001.182: 1.3296% ( 35) 00:10:46.285 8001.182 - 8053.822: 1.5722% ( 25) 00:10:46.285 8053.822 - 8106.461: 1.9216% ( 36) 00:10:46.285 8106.461 - 8159.100: 2.2807% ( 37) 00:10:46.285 8159.100 - 8211.740: 2.5136% ( 24) 00:10:46.285 8211.740 - 8264.379: 2.8144% ( 31) 00:10:46.285 8264.379 - 8317.018: 3.0765% ( 27) 00:10:46.285 8317.018 - 8369.658: 3.3482% ( 28) 00:10:46.285 8369.658 - 8422.297: 3.6782% ( 34) 00:10:46.285 8422.297 - 8474.937: 4.0373% ( 37) 00:10:46.285 8474.937 - 8527.576: 4.3575% ( 33) 00:10:46.285 8527.576 - 8580.215: 4.6972% ( 35) 00:10:46.285 8580.215 - 8632.855: 5.0757% ( 39) 00:10:46.285 8632.855 - 8685.494: 5.3668% ( 30) 00:10:46.285 8685.494 - 8738.133: 5.7550% ( 40) 00:10:46.285 8738.133 - 8790.773: 5.9880% ( 24) 00:10:46.285 8790.773 - 8843.412: 6.2791% ( 30) 00:10:46.285 8843.412 - 8896.051: 6.7450% ( 48) 00:10:46.285 8896.051 - 8948.691: 7.4437% ( 72) 00:10:46.285 8948.691 - 9001.330: 8.2686% ( 85) 00:10:46.285 9001.330 - 9053.969: 9.0936% ( 85) 00:10:46.285 9053.969 - 9106.609: 10.0155% ( 95) 00:10:46.285 9106.609 - 9159.248: 10.6658% ( 67) 00:10:46.285 9159.248 - 9211.888: 11.6363% ( 100) 00:10:46.285 9211.888 - 9264.527: 12.4224% ( 81) 00:10:46.285 9264.527 - 9317.166: 13.3637% ( 97) 00:10:46.285 9317.166 - 9369.806: 14.2857% ( 95) 00:10:46.285 9369.806 - 9422.445: 15.1592% ( 90) 00:10:46.285 9422.445 - 9475.084: 16.3043% ( 118) 00:10:46.285 9475.084 - 9527.724: 17.3137% ( 104) 00:10:46.285 9527.724 - 9580.363: 18.3618% ( 108) 00:10:46.285 9580.363 - 9633.002: 19.2547% ( 92) 00:10:46.285 9633.002 - 9685.642: 20.3319% ( 111) 00:10:46.285 9685.642 - 9738.281: 21.8168% ( 153) 00:10:46.285 9738.281 - 9790.920: 22.7582% ( 97) 00:10:46.285 9790.920 - 9843.560: 23.7092% ( 98) 00:10:46.285 9843.560 - 9896.199: 24.5827% ( 90) 00:10:46.285 9896.199 - 9948.839: 25.5338% ( 98) 00:10:46.285 9948.839 - 10001.478: 26.4946% ( 99) 00:10:46.285 10001.478 - 10054.117: 27.4165% ( 95) 00:10:46.285 10054.117 - 10106.757: 28.6491% ( 127) 00:10:46.285 10106.757 - 10159.396: 29.4740% ( 85) 00:10:46.285 10159.396 - 10212.035: 30.0563% ( 60) 00:10:46.285 10212.035 - 10264.675: 30.6386% ( 60) 00:10:46.285 10264.675 - 10317.314: 31.1335% ( 51) 00:10:46.285 10317.314 - 10369.953: 31.6091% ( 49) 00:10:46.285 10369.953 - 10422.593: 32.2884% ( 70) 00:10:46.285 10422.593 - 10475.232: 32.9969% ( 73) 00:10:46.285 10475.232 - 10527.871: 33.3172% ( 33) 00:10:46.285 10527.871 - 10580.511: 33.6957% ( 39) 00:10:46.285 10580.511 - 10633.150: 34.1033% ( 42) 00:10:46.285 10633.150 - 10685.790: 34.5012% ( 41) 00:10:46.285 10685.790 - 10738.429: 34.7535% ( 26) 00:10:46.285 10738.429 - 10791.068: 35.1611% ( 42) 00:10:46.285 10791.068 - 10843.708: 35.5396% ( 39) 00:10:46.285 10843.708 - 10896.347: 35.7434% ( 21) 00:10:46.285 10896.347 - 10948.986: 36.0054% ( 27) 00:10:46.285 10948.986 - 11001.626: 36.4130% ( 42) 00:10:46.285 11001.626 - 11054.265: 36.7333% ( 33) 00:10:46.285 11054.265 - 11106.904: 37.0536% ( 33) 00:10:46.285 11106.904 - 11159.544: 37.4612% ( 42) 00:10:46.285 11159.544 - 11212.183: 37.9173% ( 47) 00:10:46.285 11212.183 - 
11264.822: 38.5287% ( 63) 00:10:46.285 11264.822 - 11317.462: 39.0722% ( 56) 00:10:46.285 11317.462 - 11370.101: 40.0427% ( 100) 00:10:46.285 11370.101 - 11422.741: 40.9550% ( 94) 00:10:46.285 11422.741 - 11475.380: 42.0613% ( 114) 00:10:46.285 11475.380 - 11528.019: 43.3521% ( 133) 00:10:46.285 11528.019 - 11580.659: 44.7399% ( 143) 00:10:46.285 11580.659 - 11633.298: 46.0210% ( 132) 00:10:46.285 11633.298 - 11685.937: 47.1564% ( 117) 00:10:46.285 11685.937 - 11738.577: 48.5928% ( 148) 00:10:46.285 11738.577 - 11791.216: 50.1359% ( 159) 00:10:46.285 11791.216 - 11843.855: 51.6984% ( 161) 00:10:46.285 11843.855 - 11896.495: 53.3094% ( 166) 00:10:46.285 11896.495 - 11949.134: 55.3668% ( 212) 00:10:46.285 11949.134 - 12001.773: 57.0361% ( 172) 00:10:46.285 12001.773 - 12054.413: 58.5986% ( 161) 00:10:46.285 12054.413 - 12107.052: 59.9961% ( 144) 00:10:46.285 12107.052 - 12159.692: 61.6266% ( 168) 00:10:46.285 12159.692 - 12212.331: 62.8882% ( 130) 00:10:46.285 12212.331 - 12264.970: 63.8878% ( 103) 00:10:46.285 12264.970 - 12317.610: 65.1883% ( 134) 00:10:46.285 12317.610 - 12370.249: 66.1394% ( 98) 00:10:46.285 12370.249 - 12422.888: 67.1196% ( 101) 00:10:46.285 12422.888 - 12475.528: 67.7116% ( 61) 00:10:46.285 12475.528 - 12528.167: 68.4394% ( 75) 00:10:46.285 12528.167 - 12580.806: 68.9150% ( 49) 00:10:46.285 12580.806 - 12633.446: 69.4293% ( 53) 00:10:46.285 12633.446 - 12686.085: 69.6817% ( 26) 00:10:46.285 12686.085 - 12738.724: 69.9437% ( 27) 00:10:46.285 12738.724 - 12791.364: 70.2931% ( 36) 00:10:46.285 12791.364 - 12844.003: 70.5745% ( 29) 00:10:46.285 12844.003 - 12896.643: 70.8948% ( 33) 00:10:46.285 12896.643 - 12949.282: 71.3024% ( 42) 00:10:46.285 12949.282 - 13001.921: 71.7877% ( 50) 00:10:46.285 13001.921 - 13054.561: 72.0885% ( 31) 00:10:46.285 13054.561 - 13107.200: 72.4185% ( 34) 00:10:46.285 13107.200 - 13159.839: 72.8358% ( 43) 00:10:46.285 13159.839 - 13212.479: 73.2240% ( 40) 00:10:46.285 13212.479 - 13265.118: 73.5637% ( 35) 00:10:46.285 13265.118 - 13317.757: 74.0198% ( 47) 00:10:46.285 13317.757 - 13370.397: 74.4080% ( 40) 00:10:46.285 13370.397 - 13423.036: 74.7380% ( 34) 00:10:46.285 13423.036 - 13475.676: 75.0970% ( 37) 00:10:46.285 13475.676 - 13580.954: 75.7182% ( 64) 00:10:46.285 13580.954 - 13686.233: 76.5334% ( 84) 00:10:46.285 13686.233 - 13791.512: 77.4068% ( 90) 00:10:46.285 13791.512 - 13896.790: 78.1832% ( 80) 00:10:46.285 13896.790 - 14002.069: 78.9887% ( 83) 00:10:46.285 14002.069 - 14107.348: 79.7457% ( 78) 00:10:46.285 14107.348 - 14212.627: 80.1922% ( 46) 00:10:46.285 14212.627 - 14317.905: 80.6192% ( 44) 00:10:46.285 14317.905 - 14423.184: 81.2985% ( 70) 00:10:46.285 14423.184 - 14528.463: 81.8129% ( 53) 00:10:46.285 14528.463 - 14633.741: 82.4534% ( 66) 00:10:46.285 14633.741 - 14739.020: 83.0260% ( 59) 00:10:46.285 14739.020 - 14844.299: 83.4142% ( 40) 00:10:46.285 14844.299 - 14949.578: 83.6859% ( 28) 00:10:46.285 14949.578 - 15054.856: 84.2391% ( 57) 00:10:46.285 15054.856 - 15160.135: 84.4818% ( 25) 00:10:46.285 15160.135 - 15265.414: 84.6661% ( 19) 00:10:46.285 15265.414 - 15370.692: 84.8117% ( 15) 00:10:46.285 15370.692 - 15475.971: 85.1611% ( 36) 00:10:46.285 15475.971 - 15581.250: 85.5202% ( 37) 00:10:46.285 15581.250 - 15686.529: 85.9278% ( 42) 00:10:46.285 15686.529 - 15791.807: 86.1995% ( 28) 00:10:46.285 15791.807 - 15897.086: 86.4325% ( 24) 00:10:46.285 15897.086 - 16002.365: 86.7624% ( 34) 00:10:46.285 16002.365 - 16107.643: 87.1021% ( 35) 00:10:46.285 16107.643 - 16212.922: 87.4127% ( 32) 00:10:46.285 16212.922 - 16318.201: 
87.9173% ( 52) 00:10:46.285 16318.201 - 16423.480: 88.2958% ( 39) 00:10:46.285 16423.480 - 16528.758: 88.7034% ( 42) 00:10:46.285 16528.758 - 16634.037: 89.1207% ( 43) 00:10:46.285 16634.037 - 16739.316: 89.4216% ( 31) 00:10:46.285 16739.316 - 16844.594: 89.8486% ( 44) 00:10:46.285 16844.594 - 16949.873: 90.2562% ( 42) 00:10:46.285 16949.873 - 17055.152: 90.8773% ( 64) 00:10:46.285 17055.152 - 17160.431: 91.4693% ( 61) 00:10:46.285 17160.431 - 17265.709: 91.8284% ( 37) 00:10:46.285 17265.709 - 17370.988: 92.2166% ( 40) 00:10:46.285 17370.988 - 17476.267: 92.6145% ( 41) 00:10:46.285 17476.267 - 17581.545: 92.8766% ( 27) 00:10:46.285 17581.545 - 17686.824: 93.1677% ( 30) 00:10:46.285 17686.824 - 17792.103: 93.5171% ( 36) 00:10:46.285 17792.103 - 17897.382: 93.8373% ( 33) 00:10:46.285 17897.382 - 18002.660: 94.1479% ( 32) 00:10:46.285 18002.660 - 18107.939: 94.6817% ( 55) 00:10:46.285 18107.939 - 18213.218: 95.1378% ( 47) 00:10:46.285 18213.218 - 18318.496: 95.4678% ( 34) 00:10:46.285 18318.496 - 18423.775: 95.7007% ( 24) 00:10:46.285 18423.775 - 18529.054: 95.9433% ( 25) 00:10:46.285 18529.054 - 18634.333: 96.1568% ( 22) 00:10:46.285 18634.333 - 18739.611: 96.2927% ( 14) 00:10:46.285 18739.611 - 18844.890: 96.4092% ( 12) 00:10:46.285 18844.890 - 18950.169: 96.5062% ( 10) 00:10:46.285 18950.169 - 19055.447: 96.6033% ( 10) 00:10:46.285 19055.447 - 19160.726: 96.7100% ( 11) 00:10:46.285 19160.726 - 19266.005: 96.8168% ( 11) 00:10:46.285 19266.005 - 19371.284: 96.8750% ( 6) 00:10:46.285 19371.284 - 19476.562: 96.8944% ( 2) 00:10:46.285 19687.120 - 19792.398: 96.9526% ( 6) 00:10:46.285 19792.398 - 19897.677: 97.1176% ( 17) 00:10:46.285 19897.677 - 20002.956: 97.2632% ( 15) 00:10:46.285 20002.956 - 20108.235: 97.3991% ( 14) 00:10:46.285 20108.235 - 20213.513: 97.5835% ( 19) 00:10:46.285 20213.513 - 20318.792: 97.7679% ( 19) 00:10:46.285 20318.792 - 20424.071: 97.8649% ( 10) 00:10:46.285 20424.071 - 20529.349: 97.9911% ( 13) 00:10:46.285 20529.349 - 20634.628: 98.1852% ( 20) 00:10:46.285 20634.628 - 20739.907: 98.3307% ( 15) 00:10:46.285 20739.907 - 20845.186: 98.4472% ( 12) 00:10:46.285 20845.186 - 20950.464: 98.5151% ( 7) 00:10:46.285 20950.464 - 21055.743: 98.5734% ( 6) 00:10:46.285 21055.743 - 21161.022: 98.6607% ( 9) 00:10:46.285 21161.022 - 21266.300: 98.7189% ( 6) 00:10:46.285 21266.300 - 21371.579: 98.7578% ( 4) 00:10:46.285 31162.500 - 31373.057: 98.8160% ( 6) 00:10:46.285 31373.057 - 31583.614: 98.8936% ( 8) 00:10:46.285 31583.614 - 31794.172: 98.9616% ( 7) 00:10:46.285 31794.172 - 32004.729: 99.0392% ( 8) 00:10:46.285 32004.729 - 32215.287: 99.1071% ( 7) 00:10:46.285 32215.287 - 32425.844: 99.1848% ( 8) 00:10:46.285 32425.844 - 32636.402: 99.2527% ( 7) 00:10:46.285 32636.402 - 32846.959: 99.3207% ( 7) 00:10:46.285 32846.959 - 33057.516: 99.3789% ( 6) 00:10:46.285 41479.814 - 41690.371: 99.3886% ( 1) 00:10:46.285 41690.371 - 41900.929: 99.4468% ( 6) 00:10:46.285 41900.929 - 42111.486: 99.4856% ( 4) 00:10:46.285 42111.486 - 42322.043: 99.5342% ( 5) 00:10:46.285 42322.043 - 42532.601: 99.5827% ( 5) 00:10:46.285 42532.601 - 42743.158: 99.6312% ( 5) 00:10:46.285 42743.158 - 42953.716: 99.6797% ( 5) 00:10:46.285 42953.716 - 43164.273: 99.7186% ( 4) 00:10:46.285 43164.273 - 43374.831: 99.7768% ( 6) 00:10:46.285 43374.831 - 43585.388: 99.8253% ( 5) 00:10:46.285 43585.388 - 43795.945: 99.8738% ( 5) 00:10:46.285 43795.945 - 44006.503: 99.9321% ( 6) 00:10:46.285 44006.503 - 44217.060: 99.9806% ( 5) 00:10:46.285 44217.060 - 44427.618: 100.0000% ( 2) 00:10:46.285 00:10:46.285 Latency histogram for 
PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:46.286 ============================================================================== 00:10:46.286 Range in us Cumulative IO count 00:10:46.286 7369.510 - 7422.149: 0.0097% ( 1) 00:10:46.286 7422.149 - 7474.789: 0.0194% ( 1) 00:10:46.286 7580.067 - 7632.707: 0.0679% ( 5) 00:10:46.286 7632.707 - 7685.346: 0.1068% ( 4) 00:10:46.286 7685.346 - 7737.986: 0.1747% ( 7) 00:10:46.286 7737.986 - 7790.625: 0.2717% ( 10) 00:10:46.286 7790.625 - 7843.264: 0.5241% ( 26) 00:10:46.286 7843.264 - 7895.904: 0.7667% ( 25) 00:10:46.286 7895.904 - 7948.543: 1.1743% ( 42) 00:10:46.286 7948.543 - 8001.182: 1.6013% ( 44) 00:10:46.286 8001.182 - 8053.822: 1.8634% ( 27) 00:10:46.286 8053.822 - 8106.461: 2.2224% ( 37) 00:10:46.286 8106.461 - 8159.100: 2.6495% ( 44) 00:10:46.286 8159.100 - 8211.740: 2.9891% ( 35) 00:10:46.286 8211.740 - 8264.379: 3.5811% ( 61) 00:10:46.286 8264.379 - 8317.018: 3.9305% ( 36) 00:10:46.286 8317.018 - 8369.658: 4.2508% ( 33) 00:10:46.286 8369.658 - 8422.297: 4.5419% ( 30) 00:10:46.286 8422.297 - 8474.937: 4.8234% ( 29) 00:10:46.286 8474.937 - 8527.576: 4.9592% ( 14) 00:10:46.286 8527.576 - 8580.215: 5.1145% ( 16) 00:10:46.286 8580.215 - 8632.855: 5.2407% ( 13) 00:10:46.286 8632.855 - 8685.494: 5.4833% ( 25) 00:10:46.286 8685.494 - 8738.133: 5.6192% ( 14) 00:10:46.286 8738.133 - 8790.773: 5.7939% ( 18) 00:10:46.286 8790.773 - 8843.412: 6.0365% ( 25) 00:10:46.286 8843.412 - 8896.051: 6.2112% ( 18) 00:10:46.286 8896.051 - 8948.691: 6.5217% ( 32) 00:10:46.286 8948.691 - 9001.330: 7.0361% ( 53) 00:10:46.286 9001.330 - 9053.969: 7.7155% ( 70) 00:10:46.286 9053.969 - 9106.609: 8.7733% ( 109) 00:10:46.286 9106.609 - 9159.248: 9.8700% ( 113) 00:10:46.286 9159.248 - 9211.888: 10.7434% ( 90) 00:10:46.286 9211.888 - 9264.527: 12.0730% ( 137) 00:10:46.286 9264.527 - 9317.166: 13.3734% ( 134) 00:10:46.286 9317.166 - 9369.806: 14.7613% ( 143) 00:10:46.286 9369.806 - 9422.445: 15.7900% ( 106) 00:10:46.286 9422.445 - 9475.084: 16.9546% ( 120) 00:10:46.286 9475.084 - 9527.724: 17.9251% ( 100) 00:10:46.286 9527.724 - 9580.363: 18.7306% ( 83) 00:10:46.286 9580.363 - 9633.002: 19.4488% ( 74) 00:10:46.286 9633.002 - 9685.642: 20.2737% ( 85) 00:10:46.286 9685.642 - 9738.281: 21.1859% ( 94) 00:10:46.286 9738.281 - 9790.920: 22.2438% ( 109) 00:10:46.286 9790.920 - 9843.560: 23.3696% ( 116) 00:10:46.286 9843.560 - 9896.199: 24.4759% ( 114) 00:10:46.286 9896.199 - 9948.839: 25.6696% ( 123) 00:10:46.286 9948.839 - 10001.478: 26.8342% ( 120) 00:10:46.286 10001.478 - 10054.117: 27.9891% ( 119) 00:10:46.286 10054.117 - 10106.757: 29.0082% ( 105) 00:10:46.286 10106.757 - 10159.396: 29.7069% ( 72) 00:10:46.286 10159.396 - 10212.035: 30.2504% ( 56) 00:10:46.286 10212.035 - 10264.675: 30.7065% ( 47) 00:10:46.286 10264.675 - 10317.314: 31.2209% ( 53) 00:10:46.286 10317.314 - 10369.953: 31.8226% ( 62) 00:10:46.286 10369.953 - 10422.593: 32.2496% ( 44) 00:10:46.286 10422.593 - 10475.232: 32.5990% ( 36) 00:10:46.286 10475.232 - 10527.871: 33.0454% ( 46) 00:10:46.286 10527.871 - 10580.511: 33.3463% ( 31) 00:10:46.286 10580.511 - 10633.150: 33.8024% ( 47) 00:10:46.286 10633.150 - 10685.790: 34.0839% ( 29) 00:10:46.286 10685.790 - 10738.429: 34.3265% ( 25) 00:10:46.286 10738.429 - 10791.068: 34.5497% ( 23) 00:10:46.286 10791.068 - 10843.708: 34.7826% ( 24) 00:10:46.286 10843.708 - 10896.347: 35.1999% ( 43) 00:10:46.286 10896.347 - 10948.986: 35.5881% ( 40) 00:10:46.286 10948.986 - 11001.626: 36.1510% ( 58) 00:10:46.286 11001.626 - 11054.265: 36.6266% ( 49) 00:10:46.286 11054.265 - 
11106.904: 37.1021% ( 49) 00:10:46.286 11106.904 - 11159.544: 37.6650% ( 58) 00:10:46.286 11159.544 - 11212.183: 38.0532% ( 40) 00:10:46.286 11212.183 - 11264.822: 38.6355% ( 60) 00:10:46.286 11264.822 - 11317.462: 39.4022% ( 79) 00:10:46.286 11317.462 - 11370.101: 40.5182% ( 115) 00:10:46.286 11370.101 - 11422.741: 41.4305% ( 94) 00:10:46.286 11422.741 - 11475.380: 42.6048% ( 121) 00:10:46.286 11475.380 - 11528.019: 43.8082% ( 124) 00:10:46.286 11528.019 - 11580.659: 45.0796% ( 131) 00:10:46.286 11580.659 - 11633.298: 46.5256% ( 149) 00:10:46.286 11633.298 - 11685.937: 47.9814% ( 150) 00:10:46.286 11685.937 - 11738.577: 49.4759% ( 154) 00:10:46.286 11738.577 - 11791.216: 51.1452% ( 172) 00:10:46.286 11791.216 - 11843.855: 52.7756% ( 168) 00:10:46.286 11843.855 - 11896.495: 54.3090% ( 158) 00:10:46.286 11896.495 - 11949.134: 55.6095% ( 134) 00:10:46.286 11949.134 - 12001.773: 57.0749% ( 151) 00:10:46.286 12001.773 - 12054.413: 58.6762% ( 165) 00:10:46.286 12054.413 - 12107.052: 59.8797% ( 124) 00:10:46.286 12107.052 - 12159.692: 61.0637% ( 122) 00:10:46.286 12159.692 - 12212.331: 62.4418% ( 142) 00:10:46.286 12212.331 - 12264.970: 63.5093% ( 110) 00:10:46.286 12264.970 - 12317.610: 64.4701% ( 99) 00:10:46.286 12317.610 - 12370.249: 65.3047% ( 86) 00:10:46.286 12370.249 - 12422.888: 65.9744% ( 69) 00:10:46.286 12422.888 - 12475.528: 66.5858% ( 63) 00:10:46.286 12475.528 - 12528.167: 67.3331% ( 77) 00:10:46.286 12528.167 - 12580.806: 67.8474% ( 53) 00:10:46.286 12580.806 - 12633.446: 68.2162% ( 38) 00:10:46.286 12633.446 - 12686.085: 68.5850% ( 38) 00:10:46.286 12686.085 - 12738.724: 68.8956% ( 32) 00:10:46.286 12738.724 - 12791.364: 69.2352% ( 35) 00:10:46.286 12791.364 - 12844.003: 69.6817% ( 46) 00:10:46.286 12844.003 - 12896.643: 70.0990% ( 43) 00:10:46.286 12896.643 - 12949.282: 70.6328% ( 55) 00:10:46.286 12949.282 - 13001.921: 71.0016% ( 38) 00:10:46.286 13001.921 - 13054.561: 71.4189% ( 43) 00:10:46.286 13054.561 - 13107.200: 71.8847% ( 48) 00:10:46.286 13107.200 - 13159.839: 72.3311% ( 46) 00:10:46.286 13159.839 - 13212.479: 72.7387% ( 42) 00:10:46.286 13212.479 - 13265.118: 73.3307% ( 61) 00:10:46.286 13265.118 - 13317.757: 74.0974% ( 79) 00:10:46.286 13317.757 - 13370.397: 74.6409% ( 56) 00:10:46.286 13370.397 - 13423.036: 75.4464% ( 83) 00:10:46.286 13423.036 - 13475.676: 76.1258% ( 70) 00:10:46.286 13475.676 - 13580.954: 76.8245% ( 72) 00:10:46.286 13580.954 - 13686.233: 77.4359% ( 63) 00:10:46.286 13686.233 - 13791.512: 78.0182% ( 60) 00:10:46.286 13791.512 - 13896.790: 78.3967% ( 39) 00:10:46.286 13896.790 - 14002.069: 78.6879% ( 30) 00:10:46.286 14002.069 - 14107.348: 79.0179% ( 34) 00:10:46.286 14107.348 - 14212.627: 79.4643% ( 46) 00:10:46.286 14212.627 - 14317.905: 79.9689% ( 52) 00:10:46.286 14317.905 - 14423.184: 80.3571% ( 40) 00:10:46.286 14423.184 - 14528.463: 80.8812% ( 54) 00:10:46.286 14528.463 - 14633.741: 81.4441% ( 58) 00:10:46.286 14633.741 - 14739.020: 82.2787% ( 86) 00:10:46.286 14739.020 - 14844.299: 82.9678% ( 71) 00:10:46.286 14844.299 - 14949.578: 83.6277% ( 68) 00:10:46.286 14949.578 - 15054.856: 84.2488% ( 64) 00:10:46.286 15054.856 - 15160.135: 84.6953% ( 46) 00:10:46.286 15160.135 - 15265.414: 85.0738% ( 39) 00:10:46.286 15265.414 - 15370.692: 85.5105% ( 45) 00:10:46.286 15370.692 - 15475.971: 85.7628% ( 26) 00:10:46.286 15475.971 - 15581.250: 86.0248% ( 27) 00:10:46.286 15581.250 - 15686.529: 86.5295% ( 52) 00:10:46.286 15686.529 - 15791.807: 86.9080% ( 39) 00:10:46.286 15791.807 - 15897.086: 87.2283% ( 33) 00:10:46.286 15897.086 - 16002.365: 
87.5485% ( 33) 00:10:46.286 16002.365 - 16107.643: 87.7814% ( 24) 00:10:46.286 16107.643 - 16212.922: 88.0241% ( 25) 00:10:46.286 16212.922 - 16318.201: 88.3346% ( 32) 00:10:46.286 16318.201 - 16423.480: 88.8878% ( 57) 00:10:46.286 16423.480 - 16528.758: 89.4992% ( 63) 00:10:46.286 16528.758 - 16634.037: 90.0233% ( 54) 00:10:46.286 16634.037 - 16739.316: 90.3533% ( 34) 00:10:46.286 16739.316 - 16844.594: 90.6444% ( 30) 00:10:46.286 16844.594 - 16949.873: 90.9064% ( 27) 00:10:46.286 16949.873 - 17055.152: 91.1588% ( 26) 00:10:46.286 17055.152 - 17160.431: 91.5276% ( 38) 00:10:46.286 17160.431 - 17265.709: 91.9352% ( 42) 00:10:46.286 17265.709 - 17370.988: 92.2943% ( 37) 00:10:46.286 17370.988 - 17476.267: 92.8863% ( 61) 00:10:46.286 17476.267 - 17581.545: 93.2550% ( 38) 00:10:46.286 17581.545 - 17686.824: 93.6530% ( 41) 00:10:46.286 17686.824 - 17792.103: 93.8956% ( 25) 00:10:46.286 17792.103 - 17897.382: 94.1479% ( 26) 00:10:46.286 17897.382 - 18002.660: 94.3420% ( 20) 00:10:46.286 18002.660 - 18107.939: 94.5361% ( 20) 00:10:46.286 18107.939 - 18213.218: 94.7011% ( 17) 00:10:46.286 18213.218 - 18318.496: 94.8952% ( 20) 00:10:46.286 18318.496 - 18423.775: 95.1378% ( 25) 00:10:46.286 18423.775 - 18529.054: 95.3610% ( 23) 00:10:46.286 18529.054 - 18634.333: 95.6231% ( 27) 00:10:46.286 18634.333 - 18739.611: 96.2248% ( 62) 00:10:46.286 18739.611 - 18844.890: 96.5062% ( 29) 00:10:46.286 18844.890 - 18950.169: 96.6324% ( 13) 00:10:46.286 18950.169 - 19055.447: 96.8168% ( 19) 00:10:46.286 19055.447 - 19160.726: 96.9623% ( 15) 00:10:46.286 19160.726 - 19266.005: 97.1273% ( 17) 00:10:46.286 19266.005 - 19371.284: 97.1856% ( 6) 00:10:46.286 19371.284 - 19476.562: 97.2729% ( 9) 00:10:46.286 19476.562 - 19581.841: 97.3991% ( 13) 00:10:46.286 19581.841 - 19687.120: 97.5446% ( 15) 00:10:46.286 19687.120 - 19792.398: 97.6708% ( 13) 00:10:46.286 19792.398 - 19897.677: 97.7484% ( 8) 00:10:46.286 19897.677 - 20002.956: 97.8261% ( 8) 00:10:46.286 20002.956 - 20108.235: 97.8746% ( 5) 00:10:46.286 20108.235 - 20213.513: 97.9037% ( 3) 00:10:46.286 20213.513 - 20318.792: 97.9328% ( 3) 00:10:46.286 20318.792 - 20424.071: 97.9620% ( 3) 00:10:46.286 20424.071 - 20529.349: 98.0105% ( 5) 00:10:46.286 20529.349 - 20634.628: 98.1075% ( 10) 00:10:46.286 20634.628 - 20739.907: 98.2240% ( 12) 00:10:46.286 20739.907 - 20845.186: 98.3210% ( 10) 00:10:46.286 20845.186 - 20950.464: 98.4569% ( 14) 00:10:46.286 20950.464 - 21055.743: 98.5637% ( 11) 00:10:46.286 21055.743 - 21161.022: 98.6413% ( 8) 00:10:46.286 21161.022 - 21266.300: 98.6801% ( 4) 00:10:46.286 21266.300 - 21371.579: 98.7189% ( 4) 00:10:46.286 21371.579 - 21476.858: 98.7481% ( 3) 00:10:46.286 21476.858 - 21582.137: 98.7578% ( 1) 00:10:46.286 29688.598 - 29899.155: 98.7675% ( 1) 00:10:46.286 29899.155 - 30109.712: 98.8063% ( 4) 00:10:46.286 30109.712 - 30320.270: 98.8548% ( 5) 00:10:46.286 30320.270 - 30530.827: 98.8936% ( 4) 00:10:46.286 30530.827 - 30741.385: 98.9422% ( 5) 00:10:46.286 30741.385 - 30951.942: 98.9907% ( 5) 00:10:46.286 30951.942 - 31162.500: 99.0392% ( 5) 00:10:46.286 31162.500 - 31373.057: 99.0877% ( 5) 00:10:46.286 31373.057 - 31583.614: 99.1363% ( 5) 00:10:46.286 31583.614 - 31794.172: 99.1945% ( 6) 00:10:46.286 31794.172 - 32004.729: 99.2430% ( 5) 00:10:46.286 32004.729 - 32215.287: 99.2818% ( 4) 00:10:46.286 32215.287 - 32425.844: 99.3401% ( 6) 00:10:46.286 32425.844 - 32636.402: 99.3789% ( 4) 00:10:46.286 40005.912 - 40216.469: 99.3886% ( 1) 00:10:46.286 40216.469 - 40427.027: 99.4468% ( 6) 00:10:46.286 40427.027 - 40637.584: 99.4953% ( 5) 
00:10:46.286 40637.584 - 40848.141: 99.5439% ( 5) 00:10:46.286 40848.141 - 41058.699: 99.5924% ( 5) 00:10:46.286 41058.699 - 41269.256: 99.6312% ( 4) 00:10:46.286 41269.256 - 41479.814: 99.6797% ( 5) 00:10:46.286 41479.814 - 41690.371: 99.7380% ( 6) 00:10:46.286 41690.371 - 41900.929: 99.7865% ( 5) 00:10:46.286 41900.929 - 42111.486: 99.8350% ( 5) 00:10:46.286 42111.486 - 42322.043: 99.8932% ( 6) 00:10:46.286 42322.043 - 42532.601: 99.9418% ( 5) 00:10:46.286 42532.601 - 42743.158: 99.9903% ( 5) 00:10:46.286 42743.158 - 42953.716: 100.0000% ( 1) 00:10:46.286 00:10:46.286 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:46.286 ============================================================================== 00:10:46.286 Range in us Cumulative IO count 00:10:46.286 7580.067 - 7632.707: 0.1359% ( 14) 00:10:46.286 7632.707 - 7685.346: 0.3106% ( 18) 00:10:46.286 7685.346 - 7737.986: 0.5144% ( 21) 00:10:46.286 7737.986 - 7790.625: 0.8832% ( 38) 00:10:46.286 7790.625 - 7843.264: 1.0287% ( 15) 00:10:46.286 7843.264 - 7895.904: 1.1840% ( 16) 00:10:46.286 7895.904 - 7948.543: 1.2616% ( 8) 00:10:46.286 7948.543 - 8001.182: 1.2908% ( 3) 00:10:46.286 8001.182 - 8053.822: 1.3684% ( 8) 00:10:46.286 8053.822 - 8106.461: 1.5431% ( 18) 00:10:46.286 8106.461 - 8159.100: 1.9022% ( 37) 00:10:46.286 8159.100 - 8211.740: 2.3971% ( 51) 00:10:46.286 8211.740 - 8264.379: 2.9891% ( 61) 00:10:46.286 8264.379 - 8317.018: 3.6297% ( 66) 00:10:46.286 8317.018 - 8369.658: 4.0373% ( 42) 00:10:46.286 8369.658 - 8422.297: 4.4837% ( 46) 00:10:46.286 8422.297 - 8474.937: 4.7554% ( 28) 00:10:46.286 8474.937 - 8527.576: 4.8816% ( 13) 00:10:46.286 8527.576 - 8580.215: 5.0951% ( 22) 00:10:46.286 8580.215 - 8632.855: 5.3086% ( 22) 00:10:46.286 8632.855 - 8685.494: 5.5318% ( 23) 00:10:46.286 8685.494 - 8738.133: 5.9006% ( 38) 00:10:46.286 8738.133 - 8790.773: 6.1335% ( 24) 00:10:46.286 8790.773 - 8843.412: 6.5023% ( 38) 00:10:46.286 8843.412 - 8896.051: 6.9196% ( 43) 00:10:46.286 8896.051 - 8948.691: 7.2496% ( 34) 00:10:46.286 8948.691 - 9001.330: 7.8319% ( 60) 00:10:46.286 9001.330 - 9053.969: 8.3269% ( 51) 00:10:46.286 9053.969 - 9106.609: 9.2100% ( 91) 00:10:46.286 9106.609 - 9159.248: 10.2484% ( 107) 00:10:46.286 9159.248 - 9211.888: 10.8210% ( 59) 00:10:46.286 9211.888 - 9264.527: 11.5198% ( 72) 00:10:46.286 9264.527 - 9317.166: 12.0924% ( 59) 00:10:46.286 9317.166 - 9369.806: 12.9270% ( 86) 00:10:46.286 9369.806 - 9422.445: 14.1110% ( 122) 00:10:46.286 9422.445 - 9475.084: 15.3047% ( 123) 00:10:46.286 9475.084 - 9527.724: 16.6440% ( 138) 00:10:46.286 9527.724 - 9580.363: 17.5369% ( 92) 00:10:46.286 9580.363 - 9633.002: 18.6432% ( 114) 00:10:46.286 9633.002 - 9685.642: 19.7399% ( 113) 00:10:46.286 9685.642 - 9738.281: 20.7977% ( 109) 00:10:46.286 9738.281 - 9790.920: 21.8944% ( 113) 00:10:46.286 9790.920 - 9843.560: 23.4181% ( 157) 00:10:46.286 9843.560 - 9896.199: 24.6312% ( 125) 00:10:46.286 9896.199 - 9948.839: 25.9026% ( 131) 00:10:46.286 9948.839 - 10001.478: 27.2613% ( 140) 00:10:46.286 10001.478 - 10054.117: 28.5520% ( 133) 00:10:46.286 10054.117 - 10106.757: 29.6487% ( 113) 00:10:46.286 10106.757 - 10159.396: 30.4930% ( 87) 00:10:46.286 10159.396 - 10212.035: 31.1141% ( 64) 00:10:46.286 10212.035 - 10264.675: 31.5897% ( 49) 00:10:46.286 10264.675 - 10317.314: 32.2690% ( 70) 00:10:46.286 10317.314 - 10369.953: 32.7252% ( 47) 00:10:46.286 10369.953 - 10422.593: 33.1134% ( 40) 00:10:46.286 10422.593 - 10475.232: 33.4530% ( 35) 00:10:46.286 10475.232 - 10527.871: 33.7539% ( 31) 00:10:46.286 10527.871 - 
10580.511: 33.8995% ( 15) 00:10:46.286 10580.511 - 10633.150: 34.0450% ( 15) 00:10:46.286 10633.150 - 10685.790: 34.2003% ( 16) 00:10:46.286 10685.790 - 10738.429: 34.3847% ( 19) 00:10:46.286 10738.429 - 10791.068: 34.5594% ( 18) 00:10:46.286 10791.068 - 10843.708: 34.8602% ( 31) 00:10:46.286 10843.708 - 10896.347: 35.2387% ( 39) 00:10:46.286 10896.347 - 10948.986: 35.6852% ( 46) 00:10:46.286 10948.986 - 11001.626: 36.0151% ( 34) 00:10:46.286 11001.626 - 11054.265: 36.2966% ( 29) 00:10:46.286 11054.265 - 11106.904: 36.6071% ( 32) 00:10:46.286 11106.904 - 11159.544: 36.9468% ( 35) 00:10:46.286 11159.544 - 11212.183: 37.3544% ( 42) 00:10:46.286 11212.183 - 11264.822: 37.9464% ( 61) 00:10:46.286 11264.822 - 11317.462: 38.6549% ( 73) 00:10:46.286 11317.462 - 11370.101: 39.4119% ( 78) 00:10:46.286 11370.101 - 11422.741: 40.4309% ( 105) 00:10:46.286 11422.741 - 11475.380: 41.3626% ( 96) 00:10:46.286 11475.380 - 11528.019: 42.7310% ( 141) 00:10:46.286 11528.019 - 11580.659: 44.2547% ( 157) 00:10:46.287 11580.659 - 11633.298: 45.8463% ( 164) 00:10:46.287 11633.298 - 11685.937: 47.5155% ( 172) 00:10:46.287 11685.937 - 11738.577: 49.4662% ( 201) 00:10:46.287 11738.577 - 11791.216: 51.1161% ( 170) 00:10:46.287 11791.216 - 11843.855: 53.2220% ( 217) 00:10:46.287 11843.855 - 11896.495: 54.9786% ( 181) 00:10:46.287 11896.495 - 11949.134: 56.7547% ( 183) 00:10:46.287 11949.134 - 12001.773: 58.3463% ( 164) 00:10:46.287 12001.773 - 12054.413: 59.7438% ( 144) 00:10:46.287 12054.413 - 12107.052: 61.0540% ( 135) 00:10:46.287 12107.052 - 12159.692: 62.3447% ( 133) 00:10:46.287 12159.692 - 12212.331: 63.3249% ( 101) 00:10:46.287 12212.331 - 12264.970: 64.2760% ( 98) 00:10:46.287 12264.970 - 12317.610: 65.1300% ( 88) 00:10:46.287 12317.610 - 12370.249: 65.6929% ( 58) 00:10:46.287 12370.249 - 12422.888: 66.2752% ( 60) 00:10:46.287 12422.888 - 12475.528: 66.7023% ( 44) 00:10:46.287 12475.528 - 12528.167: 67.1584% ( 47) 00:10:46.287 12528.167 - 12580.806: 67.6533% ( 51) 00:10:46.287 12580.806 - 12633.446: 68.1871% ( 55) 00:10:46.287 12633.446 - 12686.085: 68.7112% ( 54) 00:10:46.287 12686.085 - 12738.724: 69.1382% ( 44) 00:10:46.287 12738.724 - 12791.364: 69.7496% ( 63) 00:10:46.287 12791.364 - 12844.003: 70.1184% ( 38) 00:10:46.287 12844.003 - 12896.643: 70.4581% ( 35) 00:10:46.287 12896.643 - 12949.282: 70.8172% ( 37) 00:10:46.287 12949.282 - 13001.921: 71.4189% ( 62) 00:10:46.287 13001.921 - 13054.561: 72.0594% ( 66) 00:10:46.287 13054.561 - 13107.200: 72.5058% ( 46) 00:10:46.287 13107.200 - 13159.839: 72.7776% ( 28) 00:10:46.287 13159.839 - 13212.479: 73.0590% ( 29) 00:10:46.287 13212.479 - 13265.118: 73.3599% ( 31) 00:10:46.287 13265.118 - 13317.757: 73.6704% ( 32) 00:10:46.287 13317.757 - 13370.397: 74.1460% ( 49) 00:10:46.287 13370.397 - 13423.036: 74.6021% ( 47) 00:10:46.287 13423.036 - 13475.676: 74.9224% ( 33) 00:10:46.287 13475.676 - 13580.954: 75.7667% ( 87) 00:10:46.287 13580.954 - 13686.233: 76.5237% ( 78) 00:10:46.287 13686.233 - 13791.512: 77.1351% ( 63) 00:10:46.287 13791.512 - 13896.790: 77.6495% ( 53) 00:10:46.287 13896.790 - 14002.069: 78.0959% ( 46) 00:10:46.287 14002.069 - 14107.348: 78.7073% ( 63) 00:10:46.287 14107.348 - 14212.627: 79.2120% ( 52) 00:10:46.287 14212.627 - 14317.905: 79.7360% ( 54) 00:10:46.287 14317.905 - 14423.184: 80.0951% ( 37) 00:10:46.287 14423.184 - 14528.463: 80.5901% ( 51) 00:10:46.287 14528.463 - 14633.741: 81.2015% ( 63) 00:10:46.287 14633.741 - 14739.020: 81.8129% ( 63) 00:10:46.287 14739.020 - 14844.299: 82.1817% ( 38) 00:10:46.287 14844.299 - 14949.578: 82.5116% 
( 34) 00:10:46.287 14949.578 - 15054.856: 83.0454% ( 55) 00:10:46.287 15054.856 - 15160.135: 83.9286% ( 91) 00:10:46.287 15160.135 - 15265.414: 84.8700% ( 97) 00:10:46.287 15265.414 - 15370.692: 85.7919% ( 95) 00:10:46.287 15370.692 - 15475.971: 86.6557% ( 89) 00:10:46.287 15475.971 - 15581.250: 87.2283% ( 59) 00:10:46.287 15581.250 - 15686.529: 87.7038% ( 49) 00:10:46.287 15686.529 - 15791.807: 88.1017% ( 41) 00:10:46.287 15791.807 - 15897.086: 88.4414% ( 35) 00:10:46.287 15897.086 - 16002.365: 88.7325% ( 30) 00:10:46.287 16002.365 - 16107.643: 88.9849% ( 26) 00:10:46.287 16107.643 - 16212.922: 89.2663% ( 29) 00:10:46.287 16212.922 - 16318.201: 89.5283% ( 27) 00:10:46.287 16318.201 - 16423.480: 89.8292% ( 31) 00:10:46.287 16423.480 - 16528.758: 90.1689% ( 35) 00:10:46.287 16528.758 - 16634.037: 90.6735% ( 52) 00:10:46.287 16634.037 - 16739.316: 90.9841% ( 32) 00:10:46.287 16739.316 - 16844.594: 91.4014% ( 43) 00:10:46.287 16844.594 - 16949.873: 91.6343% ( 24) 00:10:46.287 16949.873 - 17055.152: 91.8187% ( 19) 00:10:46.287 17055.152 - 17160.431: 91.9643% ( 15) 00:10:46.287 17160.431 - 17265.709: 92.1681% ( 21) 00:10:46.287 17265.709 - 17370.988: 92.4398% ( 28) 00:10:46.287 17370.988 - 17476.267: 92.7310% ( 30) 00:10:46.287 17476.267 - 17581.545: 92.9542% ( 23) 00:10:46.287 17581.545 - 17686.824: 93.3812% ( 44) 00:10:46.287 17686.824 - 17792.103: 93.5268% ( 15) 00:10:46.287 17792.103 - 17897.382: 93.6724% ( 15) 00:10:46.287 17897.382 - 18002.660: 93.7888% ( 12) 00:10:46.287 18002.660 - 18107.939: 93.9635% ( 18) 00:10:46.287 18107.939 - 18213.218: 94.3614% ( 41) 00:10:46.287 18213.218 - 18318.496: 94.5555% ( 20) 00:10:46.287 18318.496 - 18423.775: 94.7593% ( 21) 00:10:46.287 18423.775 - 18529.054: 94.9825% ( 23) 00:10:46.287 18529.054 - 18634.333: 95.2057% ( 23) 00:10:46.287 18634.333 - 18739.611: 95.4095% ( 21) 00:10:46.287 18739.611 - 18844.890: 95.5939% ( 19) 00:10:46.287 18844.890 - 18950.169: 95.8754% ( 29) 00:10:46.287 18950.169 - 19055.447: 96.2539% ( 39) 00:10:46.287 19055.447 - 19160.726: 96.7974% ( 56) 00:10:46.287 19160.726 - 19266.005: 97.0303% ( 24) 00:10:46.287 19266.005 - 19371.284: 97.3311% ( 31) 00:10:46.287 19371.284 - 19476.562: 97.5932% ( 27) 00:10:46.287 19476.562 - 19581.841: 97.7290% ( 14) 00:10:46.287 19581.841 - 19687.120: 97.8261% ( 10) 00:10:46.287 19687.120 - 19792.398: 97.8940% ( 7) 00:10:46.287 19792.398 - 19897.677: 97.9425% ( 5) 00:10:46.287 19897.677 - 20002.956: 98.0396% ( 10) 00:10:46.287 20002.956 - 20108.235: 98.1269% ( 9) 00:10:46.287 20108.235 - 20213.513: 98.2046% ( 8) 00:10:46.287 20213.513 - 20318.792: 98.2822% ( 8) 00:10:46.287 20318.792 - 20424.071: 98.3599% ( 8) 00:10:46.287 20424.071 - 20529.349: 98.3987% ( 4) 00:10:46.287 20529.349 - 20634.628: 98.4375% ( 4) 00:10:46.287 20634.628 - 20739.907: 98.4763% ( 4) 00:10:46.287 20739.907 - 20845.186: 98.5151% ( 4) 00:10:46.287 20845.186 - 20950.464: 98.5443% ( 3) 00:10:46.287 20950.464 - 21055.743: 98.5831% ( 4) 00:10:46.287 21055.743 - 21161.022: 98.6219% ( 4) 00:10:46.287 21161.022 - 21266.300: 98.6413% ( 2) 00:10:46.287 21266.300 - 21371.579: 98.6801% ( 4) 00:10:46.287 21371.579 - 21476.858: 98.7092% ( 3) 00:10:46.287 21476.858 - 21582.137: 98.7481% ( 4) 00:10:46.287 21582.137 - 21687.415: 98.7578% ( 1) 00:10:46.287 28425.253 - 28635.810: 98.8063% ( 5) 00:10:46.287 28635.810 - 28846.368: 98.8548% ( 5) 00:10:46.287 28846.368 - 29056.925: 98.9033% ( 5) 00:10:46.287 29056.925 - 29267.483: 98.9519% ( 5) 00:10:46.287 29267.483 - 29478.040: 99.0004% ( 5) 00:10:46.287 29478.040 - 29688.598: 99.0489% ( 5) 
00:10:46.287 29688.598 - 29899.155: 99.1071% ( 6) 00:10:46.287 29899.155 - 30109.712: 99.1460% ( 4) 00:10:46.287 30109.712 - 30320.270: 99.1945% ( 5) 00:10:46.287 30320.270 - 30530.827: 99.2333% ( 4) 00:10:46.287 30530.827 - 30741.385: 99.2915% ( 6) 00:10:46.287 30741.385 - 30951.942: 99.3401% ( 5) 00:10:46.287 30951.942 - 31162.500: 99.3789% ( 4) 00:10:46.287 38532.010 - 38742.567: 99.4080% ( 3) 00:10:46.287 38742.567 - 38953.124: 99.4565% ( 5) 00:10:46.287 38953.124 - 39163.682: 99.5148% ( 6) 00:10:46.287 39163.682 - 39374.239: 99.5730% ( 6) 00:10:46.287 39374.239 - 39584.797: 99.6215% ( 5) 00:10:46.287 39584.797 - 39795.354: 99.6700% ( 5) 00:10:46.287 39795.354 - 40005.912: 99.7283% ( 6) 00:10:46.287 40005.912 - 40216.469: 99.7865% ( 6) 00:10:46.287 40216.469 - 40427.027: 99.8447% ( 6) 00:10:46.287 40427.027 - 40637.584: 99.8835% ( 4) 00:10:46.287 40637.584 - 40848.141: 99.9418% ( 6) 00:10:46.287 40848.141 - 41058.699: 100.0000% ( 6) 00:10:46.287 00:10:46.287 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:46.287 ============================================================================== 00:10:46.287 Range in us Cumulative IO count 00:10:46.287 7369.510 - 7422.149: 0.0096% ( 1) 00:10:46.287 7422.149 - 7474.789: 0.0193% ( 1) 00:10:46.287 7474.789 - 7527.428: 0.0289% ( 1) 00:10:46.287 7580.067 - 7632.707: 0.0868% ( 6) 00:10:46.287 7632.707 - 7685.346: 0.2315% ( 15) 00:10:46.287 7685.346 - 7737.986: 0.4147% ( 19) 00:10:46.287 7737.986 - 7790.625: 0.6559% ( 25) 00:10:46.287 7790.625 - 7843.264: 0.9066% ( 26) 00:10:46.287 7843.264 - 7895.904: 1.0417% ( 14) 00:10:46.287 7895.904 - 7948.543: 1.1767% ( 14) 00:10:46.287 7948.543 - 8001.182: 1.4371% ( 27) 00:10:46.287 8001.182 - 8053.822: 1.7168% ( 29) 00:10:46.287 8053.822 - 8106.461: 2.2955% ( 60) 00:10:46.287 8106.461 - 8159.100: 2.6042% ( 32) 00:10:46.287 8159.100 - 8211.740: 2.9514% ( 36) 00:10:46.287 8211.740 - 8264.379: 3.3372% ( 40) 00:10:46.287 8264.379 - 8317.018: 3.6362% ( 31) 00:10:46.287 8317.018 - 8369.658: 3.9834% ( 36) 00:10:46.287 8369.658 - 8422.297: 4.4078% ( 44) 00:10:46.287 8422.297 - 8474.937: 4.8804% ( 49) 00:10:46.287 8474.937 - 8527.576: 5.1987% ( 33) 00:10:46.287 8527.576 - 8580.215: 5.5748% ( 39) 00:10:46.287 8580.215 - 8632.855: 5.9124% ( 35) 00:10:46.287 8632.855 - 8685.494: 6.1343% ( 23) 00:10:46.287 8685.494 - 8738.133: 6.3561% ( 23) 00:10:46.287 8738.133 - 8790.773: 6.6358% ( 29) 00:10:46.287 8790.773 - 8843.412: 6.7901% ( 16) 00:10:46.287 8843.412 - 8896.051: 6.9830% ( 20) 00:10:46.287 8896.051 - 8948.691: 7.3688% ( 40) 00:10:46.287 8948.691 - 9001.330: 7.7257% ( 37) 00:10:46.287 9001.330 - 9053.969: 8.2369% ( 53) 00:10:46.287 9053.969 - 9106.609: 8.9988% ( 79) 00:10:46.287 9106.609 - 9159.248: 10.0694% ( 111) 00:10:46.287 9159.248 - 9211.888: 11.0822% ( 105) 00:10:46.287 9211.888 - 9264.527: 12.0949% ( 105) 00:10:46.287 9264.527 - 9317.166: 12.9244% ( 86) 00:10:46.287 9317.166 - 9369.806: 13.7828% ( 89) 00:10:46.287 9369.806 - 9422.445: 14.5255% ( 77) 00:10:46.287 9422.445 - 9475.084: 15.5575% ( 107) 00:10:46.287 9475.084 - 9527.724: 16.6474% ( 113) 00:10:46.287 9527.724 - 9580.363: 17.4672% ( 85) 00:10:46.287 9580.363 - 9633.002: 18.5282% ( 110) 00:10:46.287 9633.002 - 9685.642: 19.5120% ( 102) 00:10:46.287 9685.642 - 9738.281: 20.4090% ( 93) 00:10:46.287 9738.281 - 9790.920: 21.3445% ( 97) 00:10:46.287 9790.920 - 9843.560: 22.3862% ( 108) 00:10:46.287 9843.560 - 9896.199: 23.5822% ( 124) 00:10:46.287 9896.199 - 9948.839: 24.9325% ( 140) 00:10:46.287 9948.839 - 10001.478: 26.3792% ( 150) 
00:10:46.287 10001.478 - 10054.117: 27.6910% ( 136) 00:10:46.287 10054.117 - 10106.757: 28.5590% ( 90) 00:10:46.287 10106.757 - 10159.396: 29.6875% ( 117) 00:10:46.287 10159.396 - 10212.035: 30.7388% ( 109) 00:10:46.287 10212.035 - 10264.675: 31.3947% ( 68) 00:10:46.287 10264.675 - 10317.314: 32.0891% ( 72) 00:10:46.287 10317.314 - 10369.953: 32.6100% ( 54) 00:10:46.287 10369.953 - 10422.593: 32.9572% ( 36) 00:10:46.287 10422.593 - 10475.232: 33.2369% ( 29) 00:10:46.287 10475.232 - 10527.871: 33.4780% ( 25) 00:10:46.287 10527.871 - 10580.511: 33.6613% ( 19) 00:10:46.287 10580.511 - 10633.150: 33.8156% ( 16) 00:10:46.287 10633.150 - 10685.790: 33.8831% ( 7) 00:10:46.287 10685.790 - 10738.429: 34.0085% ( 13) 00:10:46.287 10738.429 - 10791.068: 34.1725% ( 17) 00:10:46.287 10791.068 - 10843.708: 34.3268% ( 16) 00:10:46.287 10843.708 - 10896.347: 34.5486% ( 23) 00:10:46.287 10896.347 - 10948.986: 34.9633% ( 43) 00:10:46.287 10948.986 - 11001.626: 35.2334% ( 28) 00:10:46.287 11001.626 - 11054.265: 35.5228% ( 30) 00:10:46.287 11054.265 - 11106.904: 36.0629% ( 56) 00:10:46.287 11106.904 - 11159.544: 36.4969% ( 45) 00:10:46.287 11159.544 - 11212.183: 37.0467% ( 57) 00:10:46.287 11212.183 - 11264.822: 37.4614% ( 43) 00:10:46.287 11264.822 - 11317.462: 38.0980% ( 66) 00:10:46.287 11317.462 - 11370.101: 38.9950% ( 93) 00:10:46.287 11370.101 - 11422.741: 39.9016% ( 94) 00:10:46.287 11422.741 - 11475.380: 40.9722% ( 111) 00:10:46.287 11475.380 - 11528.019: 42.2743% ( 135) 00:10:46.287 11528.019 - 11580.659: 43.9333% ( 172) 00:10:46.287 11580.659 - 11633.298: 45.5440% ( 167) 00:10:46.287 11633.298 - 11685.937: 47.6852% ( 222) 00:10:46.287 11685.937 - 11738.577: 49.7492% ( 214) 00:10:46.287 11738.577 - 11791.216: 51.7168% ( 204) 00:10:46.287 11791.216 - 11843.855: 53.5590% ( 191) 00:10:46.287 11843.855 - 11896.495: 54.9961% ( 149) 00:10:46.287 11896.495 - 11949.134: 56.6551% ( 172) 00:10:46.287 11949.134 - 12001.773: 58.1308% ( 153) 00:10:46.287 12001.773 - 12054.413: 59.8283% ( 176) 00:10:46.287 12054.413 - 12107.052: 61.0725% ( 129) 00:10:46.287 12107.052 - 12159.692: 62.1238% ( 109) 00:10:46.287 12159.692 - 12212.331: 62.8183% ( 72) 00:10:46.287 12212.331 - 12264.970: 63.4645% ( 67) 00:10:46.287 12264.970 - 12317.610: 64.1107% ( 67) 00:10:46.287 12317.610 - 12370.249: 64.8920% ( 81) 00:10:46.287 12370.249 - 12422.888: 65.5768% ( 71) 00:10:46.287 12422.888 - 12475.528: 66.1458% ( 59) 00:10:46.287 12475.528 - 12528.167: 66.6474% ( 52) 00:10:46.287 12528.167 - 12580.806: 67.0332% ( 40) 00:10:46.287 12580.806 - 12633.446: 67.5926% ( 58) 00:10:46.287 12633.446 - 12686.085: 68.1520% ( 58) 00:10:46.287 12686.085 - 12738.724: 68.5475% ( 41) 00:10:46.287 12738.724 - 12791.364: 69.1262% ( 60) 00:10:46.287 12791.364 - 12844.003: 69.8206% ( 72) 00:10:46.287 12844.003 - 12896.643: 70.2836% ( 48) 00:10:46.287 12896.643 - 12949.282: 70.6501% ( 38) 00:10:46.287 12949.282 - 13001.921: 71.0455% ( 41) 00:10:46.287 13001.921 - 13054.561: 71.5471% ( 52) 00:10:46.287 13054.561 - 13107.200: 71.9715% ( 44) 00:10:46.287 13107.200 - 13159.839: 72.2512% ( 29) 00:10:46.287 13159.839 - 13212.479: 72.5598% ( 32) 00:10:46.287 13212.479 - 13265.118: 72.9167% ( 37) 00:10:46.287 13265.118 - 13317.757: 73.2060% ( 30) 00:10:46.287 13317.757 - 13370.397: 73.5725% ( 38) 00:10:46.287 13370.397 - 13423.036: 73.9101% ( 35) 00:10:46.287 13423.036 - 13475.676: 74.4888% ( 60) 00:10:46.287 13475.676 - 13580.954: 75.4244% ( 97) 00:10:46.287 13580.954 - 13686.233: 76.3407% ( 95) 00:10:46.287 13686.233 - 13791.512: 76.8036% ( 48) 00:10:46.287 
13791.512 - 13896.790: 77.2377% ( 45) 00:10:46.287 13896.790 - 14002.069: 77.7392% ( 52) 00:10:46.287 14002.069 - 14107.348: 78.2890% ( 57) 00:10:46.287 14107.348 - 14212.627: 78.9255% ( 66) 00:10:46.287 14212.627 - 14317.905: 79.2438% ( 33) 00:10:46.287 14317.905 - 14423.184: 79.6296% ( 40) 00:10:46.287 14423.184 - 14528.463: 80.0540% ( 44) 00:10:46.287 14528.463 - 14633.741: 80.4398% ( 40) 00:10:46.287 14633.741 - 14739.020: 81.0282% ( 61) 00:10:46.287 14739.020 - 14844.299: 81.7612% ( 76) 00:10:46.287 14844.299 - 14949.578: 82.4171% ( 68) 00:10:46.287 14949.578 - 15054.856: 83.5359% ( 116) 00:10:46.287 15054.856 - 15160.135: 84.6836% ( 119) 00:10:46.287 15160.135 - 15265.414: 85.2623% ( 60) 00:10:46.287 15265.414 - 15370.692: 85.8314% ( 59) 00:10:46.287 15370.692 - 15475.971: 86.5644% ( 76) 00:10:46.287 15475.971 - 15581.250: 87.0467% ( 50) 00:10:46.287 15581.250 - 15686.529: 87.4614% ( 43) 00:10:46.287 15686.529 - 15791.807: 88.0112% ( 57) 00:10:46.287 15791.807 - 15897.086: 88.2716% ( 27) 00:10:46.287 15897.086 - 16002.365: 88.5610% ( 30) 00:10:46.287 16002.365 - 16107.643: 88.7346% ( 18) 00:10:46.287 16107.643 - 16212.922: 88.9082% ( 18) 00:10:46.287 16212.922 - 16318.201: 89.0818% ( 18) 00:10:46.287 16318.201 - 16423.480: 89.2168% ( 14) 00:10:46.287 16423.480 - 16528.758: 89.4290% ( 22) 00:10:46.287 16528.758 - 16634.037: 89.7377% ( 32) 00:10:46.287 16634.037 - 16739.316: 89.9595% ( 23) 00:10:46.287 16739.316 - 16844.594: 90.3356% ( 39) 00:10:46.287 16844.594 - 16949.873: 90.8758% ( 56) 00:10:46.287 16949.873 - 17055.152: 91.2423% ( 38) 00:10:46.287 17055.152 - 17160.431: 91.5509% ( 32) 00:10:46.287 17160.431 - 17265.709: 92.0428% ( 51) 00:10:46.287 17265.709 - 17370.988: 92.3515% ( 32) 00:10:46.287 17370.988 - 17476.267: 92.5637% ( 22) 00:10:46.287 17476.267 - 17581.545: 92.8144% ( 26) 00:10:46.287 17581.545 - 17686.824: 93.1424% ( 34) 00:10:46.287 17686.824 - 17792.103: 93.4028% ( 27) 00:10:46.287 17792.103 - 17897.382: 93.6439% ( 25) 00:10:46.287 17897.382 - 18002.660: 93.9043% ( 27) 00:10:46.287 18002.660 - 18107.939: 94.0972% ( 20) 00:10:46.287 18107.939 - 18213.218: 94.3673% ( 28) 00:10:46.287 18213.218 - 18318.496: 94.6084% ( 25) 00:10:46.287 18318.496 - 18423.775: 94.9363% ( 34) 00:10:46.287 18423.775 - 18529.054: 95.1100% ( 18) 00:10:46.287 18529.054 - 18634.333: 95.2450% ( 14) 00:10:46.287 18634.333 - 18739.611: 95.5922% ( 36) 00:10:46.287 18739.611 - 18844.890: 95.9201% ( 34) 00:10:46.287 18844.890 - 18950.169: 96.1130% ( 20) 00:10:46.287 18950.169 - 19055.447: 96.3059% ( 20) 00:10:46.287 19055.447 - 19160.726: 96.7689% ( 48) 00:10:46.287 19160.726 - 19266.005: 96.9522% ( 19) 00:10:46.287 19266.005 - 19371.284: 97.1161% ( 17) 00:10:46.287 19371.284 - 19476.562: 97.3380% ( 23) 00:10:46.287 19476.562 - 19581.841: 97.7623% ( 44) 00:10:46.287 19581.841 - 19687.120: 97.9456% ( 19) 00:10:46.287 19687.120 - 19792.398: 98.1192% ( 18) 00:10:46.287 19792.398 - 19897.677: 98.2350% ( 12) 00:10:46.287 19897.677 - 20002.956: 98.4279% ( 20) 00:10:46.287 20002.956 - 20108.235: 98.6497% ( 23) 00:10:46.287 20108.235 - 20213.513: 98.8040% ( 16) 00:10:46.287 20213.513 - 20318.792: 98.9294% ( 13) 00:10:46.287 20318.792 - 20424.071: 98.9680% ( 4) 00:10:46.287 20424.071 - 20529.349: 99.0162% ( 5) 00:10:46.287 20529.349 - 20634.628: 99.0548% ( 4) 00:10:46.287 20634.628 - 20739.907: 99.0934% ( 4) 00:10:46.287 20739.907 - 20845.186: 99.1416% ( 5) 00:10:46.287 20845.186 - 20950.464: 99.1705% ( 3) 00:10:46.287 20950.464 - 21055.743: 99.2091% ( 4) 00:10:46.287 21055.743 - 21161.022: 99.2477% ( 4) 
00:10:46.287 21161.022 - 21266.300: 99.2766% ( 3) 00:10:46.287 21266.300 - 21371.579: 99.3152% ( 4) 00:10:46.287 21371.579 - 21476.858: 99.3538% ( 4) 00:10:46.287 21476.858 - 21582.137: 99.3827% ( 3) 00:10:46.287 28425.253 - 28635.810: 99.4213% ( 4) 00:10:46.287 28635.810 - 28846.368: 99.4792% ( 6) 00:10:46.287 28846.368 - 29056.925: 99.5177% ( 4) 00:10:46.287 29056.925 - 29267.483: 99.5563% ( 4) 00:10:46.287 29267.483 - 29478.040: 99.6046% ( 5) 00:10:46.287 29478.040 - 29688.598: 99.6528% ( 5) 00:10:46.287 29688.598 - 29899.155: 99.7106% ( 6) 00:10:46.287 29899.155 - 30109.712: 99.7589% ( 5) 00:10:46.287 30109.712 - 30320.270: 99.8071% ( 5) 00:10:46.287 30320.270 - 30530.827: 99.8553% ( 5) 00:10:46.287 30530.827 - 30741.385: 99.9035% ( 5) 00:10:46.287 30741.385 - 30951.942: 99.9614% ( 6) 00:10:46.287 30951.942 - 31162.500: 100.0000% ( 4) 00:10:46.287 00:10:46.287 20:17:16 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:46.287 00:10:46.287 real 0m2.648s 00:10:46.287 user 0m2.279s 00:10:46.287 sys 0m0.263s 00:10:46.287 20:17:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:46.287 20:17:16 -- common/autotest_common.sh@10 -- # set +x 00:10:46.287 ************************************ 00:10:46.287 END TEST nvme_perf 00:10:46.287 ************************************ 00:10:46.287 20:17:16 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:46.287 20:17:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:46.287 20:17:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:46.287 20:17:16 -- common/autotest_common.sh@10 -- # set +x 00:10:46.287 ************************************ 00:10:46.287 START TEST nvme_hello_world 00:10:46.287 ************************************ 00:10:46.287 20:17:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:46.546 Initializing NVMe Controllers 00:10:46.546 Attached to 0000:00:10.0 00:10:46.546 Namespace ID: 1 size: 6GB 00:10:46.546 Attached to 0000:00:11.0 00:10:46.546 Namespace ID: 1 size: 5GB 00:10:46.546 Attached to 0000:00:13.0 00:10:46.546 Namespace ID: 1 size: 1GB 00:10:46.546 Attached to 0000:00:12.0 00:10:46.546 Namespace ID: 1 size: 4GB 00:10:46.546 Namespace ID: 2 size: 4GB 00:10:46.546 Namespace ID: 3 size: 4GB 00:10:46.546 Initialization complete. 00:10:46.546 INFO: using host memory buffer for IO 00:10:46.546 Hello world! 00:10:46.546 INFO: using host memory buffer for IO 00:10:46.546 Hello world! 00:10:46.546 INFO: using host memory buffer for IO 00:10:46.546 Hello world! 00:10:46.546 INFO: using host memory buffer for IO 00:10:46.546 Hello world! 00:10:46.546 INFO: using host memory buffer for IO 00:10:46.546 Hello world! 00:10:46.546 INFO: using host memory buffer for IO 00:10:46.546 Hello world! 
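Each histogram line above is one latency bucket: the bucket bounds in microseconds, the cumulative share of I/Os that completed at or below the bucket's upper bound, and the bucket's raw I/O count in parentheses. The following self-contained C sketch reproduces that bookkeeping; the bucket bounds and counts are invented sample values, not data from this run.

/* Illustrative sketch of the cumulative-latency bookkeeping used in the
 * report above. Bucket bounds and I/O counts are invented sample values. */
#include <stdio.h>

int main(void)
{
	double upper_us[] = { 100.0, 250.0, 500.0, 1000.0 }; /* bucket upper bounds */
	unsigned io_count[] = { 40, 35, 20, 5 };             /* I/Os per bucket */
	size_t n = sizeof(io_count) / sizeof(io_count[0]);
	unsigned total = 0, running = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		total += io_count[i];
	}
	for (i = 0; i < n; i++) {
		running += io_count[i];
		/* one line per bucket: upper bound, cumulative %, raw count */
		printf("%12.3f: %8.4f%% (%5u)\n",
		       upper_us[i], 100.0 * running / total, io_count[i]);
	}
	return 0;
}

The log's own lines read the same way: a bucket such as "12896.643 - 12949.282: 71.4577% ( 74)" says 74 I/Os fell in that range and 71.4577% of all I/Os completed at or below 12949.282 us.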
00:10:46.287 20:17:16 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:46.287 20:17:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:10:46.287 20:17:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:10:46.287 20:17:16 -- common/autotest_common.sh@10 -- # set +x
00:10:46.287 ************************************
00:10:46.287 START TEST nvme_hello_world
00:10:46.287 ************************************
00:10:46.287 20:17:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:10:46.546 Initializing NVMe Controllers
00:10:46.546 Attached to 0000:00:10.0
00:10:46.546 Namespace ID: 1 size: 6GB
00:10:46.546 Attached to 0000:00:11.0
00:10:46.546 Namespace ID: 1 size: 5GB
00:10:46.546 Attached to 0000:00:13.0
00:10:46.546 Namespace ID: 1 size: 1GB
00:10:46.546 Attached to 0000:00:12.0
00:10:46.546 Namespace ID: 1 size: 4GB
00:10:46.546 Namespace ID: 2 size: 4GB
00:10:46.546 Namespace ID: 3 size: 4GB
00:10:46.546 Initialization complete.
00:10:46.546 INFO: using host memory buffer for IO
00:10:46.546 Hello world!
00:10:46.546 INFO: using host memory buffer for IO
00:10:46.546 Hello world!
00:10:46.546 INFO: using host memory buffer for IO
00:10:46.546 Hello world!
00:10:46.546 INFO: using host memory buffer for IO
00:10:46.546 Hello world!
00:10:46.546 INFO: using host memory buffer for IO
00:10:46.546 Hello world!
00:10:46.546 INFO: using host memory buffer for IO
00:10:46.546 Hello world!
00:10:46.546 ************************************
00:10:46.546 END TEST nvme_hello_world
00:10:46.546 ************************************
00:10:46.546 
00:10:46.546 real 0m0.329s
00:10:46.546 user 0m0.126s
00:10:46.546 sys 0m0.140s
00:10:46.546 20:17:16 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:10:46.546 20:17:16 -- common/autotest_common.sh@10 -- # set +x
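The hello_world run above probes all four PCIe controllers, attaches to each, and prints one greeting per namespace. Below is a minimal sketch of that probe/attach/enumeration flow; it is not the example's actual source, the app name is hypothetical, and the write/read I/O path and detach/cleanup are elided. It only assumes SPDK's public NVMe API (spdk/nvme.h) with devices already bound to a userspace driver.

/* A minimal sketch (not SPDK's hello_world source) of the probe/attach
 * flow the test exercises. Assumes SPDK's public NVMe API. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true;	/* attach to every controller the probe finds */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	uint32_t nsid;

	printf("Attached to %s\n", trid->traddr);
	/* walk active namespaces, as in the "Namespace ID: N size: XGB" lines */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("Namespace ID: %u size: %juGB\n", nsid,
		       (uintmax_t)(spdk_nvme_ns_get_size(ns) / 1000000000));
	}
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "hello_world_sketch";	/* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* NULL transport ID probes local PCIe controllers by default */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
}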
0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:47.070 NVMe Readv/Writev Request test 00:10:47.070 Attached to 0000:00:10.0 00:10:47.070 Attached to 0000:00:11.0 00:10:47.070 Attached to 0000:00:13.0 00:10:47.070 Attached to 0000:00:12.0 00:10:47.070 0000:00:10.0: build_io_request_2 test passed 00:10:47.070 0000:00:10.0: build_io_request_4 test passed 00:10:47.070 0000:00:10.0: build_io_request_5 test passed 00:10:47.070 0000:00:10.0: build_io_request_6 test passed 00:10:47.070 0000:00:10.0: build_io_request_7 test passed 00:10:47.070 0000:00:10.0: build_io_request_10 test passed 00:10:47.070 0000:00:11.0: build_io_request_2 test passed 00:10:47.070 0000:00:11.0: build_io_request_4 test passed 00:10:47.070 0000:00:11.0: build_io_request_5 test passed 00:10:47.070 0000:00:11.0: build_io_request_6 test passed 00:10:47.070 0000:00:11.0: build_io_request_7 test passed 00:10:47.070 0000:00:11.0: build_io_request_10 test passed 00:10:47.070 Cleaning up... 00:10:47.070 00:10:47.070 real 0m0.347s 00:10:47.070 user 0m0.162s 00:10:47.070 sys 0m0.141s 00:10:47.070 20:17:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:47.070 20:17:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.070 ************************************ 00:10:47.070 END TEST nvme_sgl 00:10:47.070 ************************************ 00:10:47.070 20:17:17 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:47.070 20:17:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:47.070 20:17:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.070 20:17:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.070 ************************************ 00:10:47.070 START TEST nvme_e2edp 00:10:47.070 ************************************ 00:10:47.070 20:17:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:47.329 NVMe Write/Read with End-to-End data protection test 00:10:47.329 Attached to 0000:00:10.0 00:10:47.329 Attached to 0000:00:11.0 00:10:47.329 Attached to 0000:00:13.0 00:10:47.329 Attached to 0000:00:12.0 00:10:47.329 Cleaning up... 
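For reference, the end-to-end data protection run above is the nvme_dp binary that run_test wraps at nvme/nvme.sh@89. A minimal sketch of rerunning it by hand against the same controllers, using the repo path shown throughout this log; the scripts/setup.sh step and sudo are assumptions about the environment (a stock SPDK checkout ships that helper to bind NVMe devices to a userspace driver, but this CI image may already have done it):
  # Bind the NVMe devices to vfio-pci/uio first (stock SPDK helper; may be a no-op here).
  sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # Invoke the same e2edp test binary the harness ran above.
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp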
00:10:47.329 00:10:47.329 real 0m0.286s 00:10:47.329 user 0m0.092s 00:10:47.329 sys 0m0.145s 00:10:47.329 20:17:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:47.329 20:17:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.329 ************************************ 00:10:47.329 END TEST nvme_e2edp 00:10:47.329 ************************************ 00:10:47.329 20:17:17 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:47.329 20:17:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:47.329 20:17:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.329 20:17:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.587 ************************************ 00:10:47.587 START TEST nvme_reserve 00:10:47.587 ************************************ 00:10:47.587 20:17:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:47.846 ===================================================== 00:10:47.846 NVMe Controller at PCI bus 0, device 16, function 0 00:10:47.846 ===================================================== 00:10:47.846 Reservations: Not Supported 00:10:47.846 ===================================================== 00:10:47.846 NVMe Controller at PCI bus 0, device 17, function 0 00:10:47.846 ===================================================== 00:10:47.846 Reservations: Not Supported 00:10:47.846 ===================================================== 00:10:47.846 NVMe Controller at PCI bus 0, device 19, function 0 00:10:47.846 ===================================================== 00:10:47.846 Reservations: Not Supported 00:10:47.846 ===================================================== 00:10:47.846 NVMe Controller at PCI bus 0, device 18, function 0 00:10:47.846 ===================================================== 00:10:47.846 Reservations: Not Supported 00:10:47.846 Reservation test passed 00:10:47.846 00:10:47.846 real 0m0.277s 00:10:47.846 user 0m0.096s 00:10:47.846 sys 0m0.139s 00:10:47.846 ************************************ 00:10:47.846 END TEST nvme_reserve 00:10:47.846 ************************************ 00:10:47.846 20:17:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:47.846 20:17:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.846 20:17:17 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:47.846 20:17:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:47.846 20:17:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:47.846 20:17:17 -- common/autotest_common.sh@10 -- # set +x 00:10:47.846 ************************************ 00:10:47.846 START TEST nvme_err_injection 00:10:47.846 ************************************ 00:10:47.846 20:17:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:48.105 NVMe Error Injection test 00:10:48.105 Attached to 0000:00:10.0 00:10:48.105 Attached to 0000:00:11.0 00:10:48.105 Attached to 0000:00:13.0 00:10:48.105 Attached to 0000:00:12.0 00:10:48.105 0000:00:10.0: get features failed as expected 00:10:48.105 0000:00:11.0: get features failed as expected 00:10:48.105 0000:00:13.0: get features failed as expected 00:10:48.105 0000:00:12.0: get features failed as expected 00:10:48.105 0000:00:10.0: get features successfully as expected 00:10:48.105 0000:00:11.0: get features successfully as expected 00:10:48.105 0000:00:13.0: get 
features successfully as expected 00:10:48.105 0000:00:12.0: get features successfully as expected 00:10:48.105 0000:00:10.0: read failed as expected 00:10:48.105 0000:00:11.0: read failed as expected 00:10:48.105 0000:00:13.0: read failed as expected 00:10:48.105 0000:00:12.0: read failed as expected 00:10:48.105 0000:00:10.0: read successfully as expected 00:10:48.105 0000:00:11.0: read successfully as expected 00:10:48.105 0000:00:13.0: read successfully as expected 00:10:48.105 0000:00:12.0: read successfully as expected 00:10:48.105 Cleaning up... 00:10:48.105 00:10:48.105 real 0m0.290s 00:10:48.105 user 0m0.111s 00:10:48.105 sys 0m0.136s 00:10:48.105 ************************************ 00:10:48.105 END TEST nvme_err_injection 00:10:48.105 ************************************ 00:10:48.105 20:17:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:48.105 20:17:18 -- common/autotest_common.sh@10 -- # set +x 00:10:48.365 20:17:18 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:48.365 20:17:18 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:10:48.365 20:17:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:48.365 20:17:18 -- common/autotest_common.sh@10 -- # set +x 00:10:48.365 ************************************ 00:10:48.365 START TEST nvme_overhead 00:10:48.365 ************************************ 00:10:48.365 20:17:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:49.740 Initializing NVMe Controllers 00:10:49.740 Attached to 0000:00:10.0 00:10:49.740 Attached to 0000:00:11.0 00:10:49.740 Attached to 0000:00:13.0 00:10:49.740 Attached to 0000:00:12.0 00:10:49.740 Initialization complete. Launching workers. 
00:10:49.740 submit (in ns) avg, min, max = 15547.0, 12456.2, 109916.5 00:10:49.740 complete (in ns) avg, min, max = 10151.7, 7757.4, 162904.4 00:10:49.740 00:10:49.740 Submit histogram 00:10:49.740 ================ 00:10:49.740 Range in us Cumulative Count 00:10:49.740 12.440 - 12.492: 0.0307% ( 2) 00:10:49.740 12.492 - 12.543: 0.0768% ( 3) 00:10:49.740 12.543 - 12.594: 0.1382% ( 4) 00:10:49.740 12.594 - 12.646: 0.2304% ( 6) 00:10:49.740 12.646 - 12.697: 0.6452% ( 27) 00:10:49.740 12.697 - 12.749: 1.0906% ( 29) 00:10:49.740 12.749 - 12.800: 1.6283% ( 35) 00:10:49.740 12.800 - 12.851: 2.2273% ( 39) 00:10:49.740 12.851 - 12.903: 2.7189% ( 32) 00:10:49.740 12.903 - 12.954: 3.5484% ( 54) 00:10:49.740 12.954 - 13.006: 4.0860% ( 35) 00:10:49.740 13.006 - 13.057: 4.7158% ( 41) 00:10:49.740 13.057 - 13.108: 5.6682% ( 62) 00:10:49.740 13.108 - 13.160: 6.2826% ( 40) 00:10:49.740 13.160 - 13.263: 7.8802% ( 104) 00:10:49.740 13.263 - 13.365: 9.7849% ( 124) 00:10:49.740 13.365 - 13.468: 12.0276% ( 146) 00:10:49.740 13.468 - 13.571: 15.1459% ( 203) 00:10:49.740 13.571 - 13.674: 18.7711% ( 236) 00:10:49.740 13.674 - 13.777: 22.4117% ( 237) 00:10:49.740 13.777 - 13.880: 25.8679% ( 225) 00:10:49.740 13.880 - 13.982: 29.8925% ( 262) 00:10:49.740 13.982 - 14.085: 33.8556% ( 258) 00:10:49.740 14.085 - 14.188: 38.3257% ( 291) 00:10:49.740 14.188 - 14.291: 43.8556% ( 360) 00:10:49.740 14.291 - 14.394: 49.5853% ( 373) 00:10:49.740 14.394 - 14.496: 54.8387% ( 342) 00:10:49.740 14.496 - 14.599: 59.1244% ( 279) 00:10:49.740 14.599 - 14.702: 63.0108% ( 253) 00:10:49.740 14.702 - 14.805: 66.3287% ( 216) 00:10:49.740 14.805 - 14.908: 69.2320% ( 189) 00:10:49.741 14.908 - 15.010: 72.0737% ( 185) 00:10:49.741 15.010 - 15.113: 74.9309% ( 186) 00:10:49.741 15.113 - 15.216: 77.5269% ( 169) 00:10:49.741 15.216 - 15.319: 79.6160% ( 136) 00:10:49.741 15.319 - 15.422: 81.2289% ( 105) 00:10:49.741 15.422 - 15.524: 82.3963% ( 76) 00:10:49.741 15.524 - 15.627: 83.2565% ( 56) 00:10:49.741 15.627 - 15.730: 83.9171% ( 43) 00:10:49.741 15.730 - 15.833: 84.3318% ( 27) 00:10:49.741 15.833 - 15.936: 84.6697% ( 22) 00:10:49.741 15.936 - 16.039: 84.7465% ( 5) 00:10:49.741 16.039 - 16.141: 84.8541% ( 7) 00:10:49.741 16.141 - 16.244: 84.9002% ( 3) 00:10:49.741 16.244 - 16.347: 85.0230% ( 8) 00:10:49.741 16.347 - 16.450: 85.0691% ( 3) 00:10:49.741 16.450 - 16.553: 85.1152% ( 3) 00:10:49.741 16.553 - 16.655: 85.1459% ( 2) 00:10:49.741 16.758 - 16.861: 85.1613% ( 1) 00:10:49.741 16.861 - 16.964: 85.1767% ( 1) 00:10:49.741 16.964 - 17.067: 85.2074% ( 2) 00:10:49.741 17.067 - 17.169: 85.2535% ( 3) 00:10:49.741 17.169 - 17.272: 85.3303% ( 5) 00:10:49.741 17.272 - 17.375: 85.4071% ( 5) 00:10:49.741 17.375 - 17.478: 85.4531% ( 3) 00:10:49.741 17.478 - 17.581: 85.4992% ( 3) 00:10:49.741 17.581 - 17.684: 85.6682% ( 11) 00:10:49.741 17.684 - 17.786: 85.7911% ( 8) 00:10:49.741 17.786 - 17.889: 85.9754% ( 12) 00:10:49.741 17.889 - 17.992: 86.2980% ( 21) 00:10:49.741 17.992 - 18.095: 86.5284% ( 15) 00:10:49.741 18.095 - 18.198: 86.8817% ( 23) 00:10:49.741 18.198 - 18.300: 87.2350% ( 23) 00:10:49.741 18.300 - 18.403: 87.4654% ( 15) 00:10:49.741 18.403 - 18.506: 87.7727% ( 20) 00:10:49.741 18.506 - 18.609: 88.1413% ( 24) 00:10:49.741 18.609 - 18.712: 88.6329% ( 32) 00:10:49.741 18.712 - 18.814: 88.8940% ( 17) 00:10:49.741 18.814 - 18.917: 89.2934% ( 26) 00:10:49.741 18.917 - 19.020: 89.7849% ( 32) 00:10:49.741 19.020 - 19.123: 90.1382% ( 23) 00:10:49.741 19.123 - 19.226: 90.4916% ( 23) 00:10:49.741 19.226 - 19.329: 90.8602% ( 24) 00:10:49.741 19.329 - 
19.431: 91.1060% ( 16) 00:10:49.741 19.431 - 19.534: 91.4286% ( 21) 00:10:49.741 19.534 - 19.637: 91.6743% ( 16) 00:10:49.741 19.637 - 19.740: 91.8894% ( 14) 00:10:49.741 19.740 - 19.843: 92.1505% ( 17) 00:10:49.741 19.843 - 19.945: 92.3349% ( 12) 00:10:49.741 19.945 - 20.048: 92.6114% ( 18) 00:10:49.741 20.048 - 20.151: 92.9647% ( 23) 00:10:49.741 20.151 - 20.254: 93.1183% ( 10) 00:10:49.741 20.254 - 20.357: 93.3641% ( 16) 00:10:49.741 20.357 - 20.459: 93.6098% ( 16) 00:10:49.741 20.459 - 20.562: 93.7020% ( 6) 00:10:49.741 20.562 - 20.665: 93.8863% ( 12) 00:10:49.741 20.665 - 20.768: 94.1167% ( 15) 00:10:49.741 20.768 - 20.871: 94.2550% ( 9) 00:10:49.741 20.871 - 20.973: 94.4086% ( 10) 00:10:49.741 20.973 - 21.076: 94.5622% ( 10) 00:10:49.741 21.076 - 21.179: 94.6237% ( 4) 00:10:49.741 21.179 - 21.282: 94.6851% ( 4) 00:10:49.741 21.282 - 21.385: 94.7926% ( 7) 00:10:49.741 21.385 - 21.488: 94.8694% ( 5) 00:10:49.741 21.488 - 21.590: 94.9309% ( 4) 00:10:49.741 21.590 - 21.693: 95.0384% ( 7) 00:10:49.741 21.693 - 21.796: 95.1613% ( 8) 00:10:49.741 21.796 - 21.899: 95.1767% ( 1) 00:10:49.741 21.899 - 22.002: 95.2842% ( 7) 00:10:49.741 22.002 - 22.104: 95.3303% ( 3) 00:10:49.741 22.104 - 22.207: 95.4071% ( 5) 00:10:49.741 22.207 - 22.310: 95.4224% ( 1) 00:10:49.741 22.310 - 22.413: 95.4839% ( 4) 00:10:49.741 22.413 - 22.516: 95.5300% ( 3) 00:10:49.741 22.516 - 22.618: 95.5607% ( 2) 00:10:49.741 22.618 - 22.721: 95.5914% ( 2) 00:10:49.741 22.721 - 22.824: 95.6375% ( 3) 00:10:49.741 22.824 - 22.927: 95.6528% ( 1) 00:10:49.741 22.927 - 23.030: 95.6836% ( 2) 00:10:49.741 23.133 - 23.235: 95.6989% ( 1) 00:10:49.741 23.338 - 23.441: 95.7143% ( 1) 00:10:49.741 23.544 - 23.647: 95.7450% ( 2) 00:10:49.741 23.852 - 23.955: 95.7604% ( 1) 00:10:49.741 24.263 - 24.366: 95.7757% ( 1) 00:10:49.741 24.469 - 24.572: 95.7911% ( 1) 00:10:49.741 24.572 - 24.675: 95.8065% ( 1) 00:10:49.741 24.675 - 24.778: 95.8372% ( 2) 00:10:49.741 24.778 - 24.880: 95.8986% ( 4) 00:10:49.741 24.880 - 24.983: 96.0215% ( 8) 00:10:49.741 24.983 - 25.086: 96.0983% ( 5) 00:10:49.741 25.086 - 25.189: 96.1444% ( 3) 00:10:49.741 25.189 - 25.292: 96.2058% ( 4) 00:10:49.741 25.292 - 25.394: 96.2366% ( 2) 00:10:49.741 25.394 - 25.497: 96.2519% ( 1) 00:10:49.741 25.497 - 25.600: 96.2673% ( 1) 00:10:49.741 25.600 - 25.703: 96.3287% ( 4) 00:10:49.741 25.703 - 25.806: 96.4055% ( 5) 00:10:49.741 25.806 - 25.908: 96.5284% ( 8) 00:10:49.741 25.908 - 26.011: 96.6513% ( 8) 00:10:49.741 26.011 - 26.114: 96.7896% ( 9) 00:10:49.741 26.114 - 26.217: 96.9278% ( 9) 00:10:49.741 26.217 - 26.320: 97.0046% ( 5) 00:10:49.741 26.320 - 26.525: 97.1275% ( 8) 00:10:49.741 26.525 - 26.731: 97.1429% ( 1) 00:10:49.741 26.731 - 26.937: 97.2043% ( 4) 00:10:49.741 26.937 - 27.142: 97.2504% ( 3) 00:10:49.741 27.142 - 27.348: 97.2811% ( 2) 00:10:49.741 27.348 - 27.553: 97.3425% ( 4) 00:10:49.741 27.553 - 27.759: 97.4040% ( 4) 00:10:49.741 27.759 - 27.965: 97.4654% ( 4) 00:10:49.741 27.965 - 28.170: 97.5269% ( 4) 00:10:49.741 28.170 - 28.376: 97.6344% ( 7) 00:10:49.741 28.376 - 28.582: 97.7266% ( 6) 00:10:49.741 28.582 - 28.787: 97.7880% ( 4) 00:10:49.741 28.787 - 28.993: 97.8341% ( 3) 00:10:49.741 29.198 - 29.404: 97.8495% ( 1) 00:10:49.741 29.610 - 29.815: 97.8955% ( 3) 00:10:49.741 29.815 - 30.021: 97.9570% ( 4) 00:10:49.741 30.021 - 30.227: 98.0184% ( 4) 00:10:49.741 30.227 - 30.432: 98.0338% ( 1) 00:10:49.741 30.432 - 30.638: 98.1106% ( 5) 00:10:49.741 30.638 - 30.843: 98.1567% ( 3) 00:10:49.741 30.843 - 31.049: 98.2949% ( 9) 00:10:49.741 31.049 - 31.255: 98.3257% ( 
2) 00:10:49.741 31.255 - 31.460: 98.4178% ( 6) 00:10:49.741 31.666 - 31.871: 98.4639% ( 3) 00:10:49.741 31.871 - 32.077: 98.5100% ( 3) 00:10:49.741 32.488 - 32.694: 98.5253% ( 1) 00:10:49.741 33.105 - 33.311: 98.5868% ( 4) 00:10:49.741 33.722 - 33.928: 98.6022% ( 1) 00:10:49.741 33.928 - 34.133: 98.6482% ( 3) 00:10:49.741 34.133 - 34.339: 98.6943% ( 3) 00:10:49.741 34.339 - 34.545: 98.7250% ( 2) 00:10:49.741 34.545 - 34.750: 98.7404% ( 1) 00:10:49.741 34.956 - 35.161: 98.7558% ( 1) 00:10:49.741 35.984 - 36.190: 98.7711% ( 1) 00:10:49.741 36.190 - 36.395: 98.7865% ( 1) 00:10:49.741 36.395 - 36.601: 98.8018% ( 1) 00:10:49.741 36.601 - 36.806: 98.8172% ( 1) 00:10:49.741 37.012 - 37.218: 98.8479% ( 2) 00:10:49.741 37.218 - 37.423: 98.8940% ( 3) 00:10:49.741 37.423 - 37.629: 99.0476% ( 10) 00:10:49.741 37.629 - 37.835: 99.2627% ( 14) 00:10:49.741 37.835 - 38.040: 99.4163% ( 10) 00:10:49.741 38.040 - 38.246: 99.4931% ( 5) 00:10:49.741 38.246 - 38.451: 99.5392% ( 3) 00:10:49.741 38.451 - 38.657: 99.5699% ( 2) 00:10:49.741 38.657 - 38.863: 99.6313% ( 4) 00:10:49.741 38.863 - 39.068: 99.6467% ( 1) 00:10:49.741 39.068 - 39.274: 99.6928% ( 3) 00:10:49.741 39.274 - 39.480: 99.7235% ( 2) 00:10:49.741 40.096 - 40.302: 99.7389% ( 1) 00:10:49.741 40.302 - 40.508: 99.7696% ( 2) 00:10:49.741 41.330 - 41.536: 99.7849% ( 1) 00:10:49.741 42.358 - 42.564: 99.8003% ( 1) 00:10:49.741 42.564 - 42.769: 99.8157% ( 1) 00:10:49.741 42.975 - 43.181: 99.8310% ( 1) 00:10:49.741 43.592 - 43.798: 99.8464% ( 1) 00:10:49.741 46.471 - 46.676: 99.8618% ( 1) 00:10:49.741 48.733 - 48.938: 99.8771% ( 1) 00:10:49.741 49.144 - 49.349: 99.8925% ( 1) 00:10:49.741 50.583 - 50.789: 99.9078% ( 1) 00:10:49.741 56.341 - 56.752: 99.9232% ( 1) 00:10:49.741 57.163 - 57.574: 99.9386% ( 1) 00:10:49.741 57.986 - 58.397: 99.9539% ( 1) 00:10:49.741 64.565 - 64.977: 99.9693% ( 1) 00:10:49.741 71.557 - 71.968: 99.9846% ( 1) 00:10:49.741 109.391 - 110.214: 100.0000% ( 1) 00:10:49.741 00:10:49.741 Complete histogram 00:10:49.741 ================== 00:10:49.741 Range in us Cumulative Count 00:10:49.741 7.711 - 7.762: 0.0154% ( 1) 00:10:49.741 7.762 - 7.814: 0.0307% ( 1) 00:10:49.741 7.814 - 7.865: 0.3533% ( 21) 00:10:49.741 7.865 - 7.916: 0.7680% ( 27) 00:10:49.742 7.916 - 7.968: 1.1521% ( 25) 00:10:49.742 7.968 - 8.019: 1.4439% ( 19) 00:10:49.742 8.019 - 8.071: 1.6743% ( 15) 00:10:49.742 8.071 - 8.122: 1.7819% ( 7) 00:10:49.742 8.122 - 8.173: 1.8894% ( 7) 00:10:49.742 8.173 - 8.225: 2.0276% ( 9) 00:10:49.742 8.225 - 8.276: 2.9493% ( 60) 00:10:49.742 8.276 - 8.328: 5.3610% ( 157) 00:10:49.742 8.328 - 8.379: 8.0184% ( 173) 00:10:49.742 8.379 - 8.431: 10.2458% ( 145) 00:10:49.742 8.431 - 8.482: 12.1198% ( 122) 00:10:49.742 8.482 - 8.533: 13.7481% ( 106) 00:10:49.742 8.533 - 8.585: 14.9770% ( 80) 00:10:49.742 8.585 - 8.636: 16.8049% ( 119) 00:10:49.742 8.636 - 8.688: 18.4332% ( 106) 00:10:49.742 8.688 - 8.739: 20.6144% ( 142) 00:10:49.742 8.739 - 8.790: 24.2243% ( 235) 00:10:49.742 8.790 - 8.842: 29.5392% ( 346) 00:10:49.742 8.842 - 8.893: 34.6697% ( 334) 00:10:49.742 8.893 - 8.945: 38.8172% ( 270) 00:10:49.742 8.945 - 8.996: 42.4424% ( 236) 00:10:49.742 8.996 - 9.047: 46.4516% ( 261) 00:10:49.742 9.047 - 9.099: 49.8464% ( 221) 00:10:49.742 9.099 - 9.150: 52.8111% ( 193) 00:10:49.742 9.150 - 9.202: 55.1459% ( 152) 00:10:49.742 9.202 - 9.253: 57.0814% ( 126) 00:10:49.742 9.253 - 9.304: 59.3702% ( 149) 00:10:49.742 9.304 - 9.356: 61.9816% ( 170) 00:10:49.742 9.356 - 9.407: 64.8541% ( 187) 00:10:49.742 9.407 - 9.459: 68.1567% ( 215) 00:10:49.742 9.459 - 
9.510: 70.3994% ( 146) 00:10:49.742 9.510 - 9.561: 72.2888% ( 123) 00:10:49.742 9.561 - 9.613: 74.2857% ( 130) 00:10:49.742 9.613 - 9.664: 76.0522% ( 115) 00:10:49.742 9.664 - 9.716: 77.5883% ( 100) 00:10:49.742 9.716 - 9.767: 79.0937% ( 98) 00:10:49.742 9.767 - 9.818: 80.1536% ( 69) 00:10:49.742 9.818 - 9.870: 81.1214% ( 63) 00:10:49.742 9.870 - 9.921: 82.0891% ( 63) 00:10:49.742 9.921 - 9.973: 82.9647% ( 57) 00:10:49.742 9.973 - 10.024: 83.4716% ( 33) 00:10:49.742 10.024 - 10.076: 83.9324% ( 30) 00:10:49.742 10.076 - 10.127: 84.4854% ( 36) 00:10:49.742 10.127 - 10.178: 84.7926% ( 20) 00:10:49.742 10.178 - 10.230: 85.1306% ( 22) 00:10:49.742 10.230 - 10.281: 85.3917% ( 17) 00:10:49.742 10.281 - 10.333: 85.5914% ( 13) 00:10:49.742 10.333 - 10.384: 85.7911% ( 13) 00:10:49.742 10.384 - 10.435: 85.9140% ( 8) 00:10:49.742 10.435 - 10.487: 86.0215% ( 7) 00:10:49.742 10.487 - 10.538: 86.1905% ( 11) 00:10:49.742 10.538 - 10.590: 86.3902% ( 13) 00:10:49.742 10.590 - 10.641: 86.4977% ( 7) 00:10:49.742 10.641 - 10.692: 86.5899% ( 6) 00:10:49.742 10.692 - 10.744: 86.6820% ( 6) 00:10:49.742 10.744 - 10.795: 86.7435% ( 4) 00:10:49.742 10.795 - 10.847: 86.7896% ( 3) 00:10:49.742 10.847 - 10.898: 86.8664% ( 5) 00:10:49.742 10.898 - 10.949: 86.9124% ( 3) 00:10:49.742 10.949 - 11.001: 86.9585% ( 3) 00:10:49.742 11.001 - 11.052: 86.9739% ( 1) 00:10:49.742 11.052 - 11.104: 87.0200% ( 3) 00:10:49.742 11.104 - 11.155: 87.0814% ( 4) 00:10:49.742 11.155 - 11.206: 87.0968% ( 1) 00:10:49.742 11.206 - 11.258: 87.1275% ( 2) 00:10:49.742 11.258 - 11.309: 87.1429% ( 1) 00:10:49.742 11.309 - 11.361: 87.1582% ( 1) 00:10:49.742 11.361 - 11.412: 87.1736% ( 1) 00:10:49.742 11.412 - 11.463: 87.2043% ( 2) 00:10:49.742 11.463 - 11.515: 87.2350% ( 2) 00:10:49.742 11.515 - 11.566: 87.2504% ( 1) 00:10:49.742 11.566 - 11.618: 87.4040% ( 10) 00:10:49.742 11.618 - 11.669: 87.5730% ( 11) 00:10:49.742 11.669 - 11.720: 87.8955% ( 21) 00:10:49.742 11.720 - 11.772: 88.1106% ( 14) 00:10:49.742 11.772 - 11.823: 88.2642% ( 10) 00:10:49.742 11.823 - 11.875: 88.4332% ( 11) 00:10:49.742 11.875 - 11.926: 88.5868% ( 10) 00:10:49.742 11.926 - 11.978: 88.6636% ( 5) 00:10:49.742 11.978 - 12.029: 88.7558% ( 6) 00:10:49.742 12.029 - 12.080: 88.9862% ( 15) 00:10:49.742 12.080 - 12.132: 89.1551% ( 11) 00:10:49.742 12.132 - 12.183: 89.3088% ( 10) 00:10:49.742 12.183 - 12.235: 89.4624% ( 10) 00:10:49.742 12.235 - 12.286: 89.5853% ( 8) 00:10:49.742 12.286 - 12.337: 89.7696% ( 12) 00:10:49.742 12.337 - 12.389: 89.9693% ( 13) 00:10:49.742 12.389 - 12.440: 90.1997% ( 15) 00:10:49.742 12.440 - 12.492: 90.3226% ( 8) 00:10:49.742 12.492 - 12.543: 90.5069% ( 12) 00:10:49.742 12.543 - 12.594: 90.6452% ( 9) 00:10:49.742 12.594 - 12.646: 90.8756% ( 15) 00:10:49.742 12.646 - 12.697: 91.0599% ( 12) 00:10:49.742 12.697 - 12.749: 91.2442% ( 12) 00:10:49.742 12.749 - 12.800: 91.4132% ( 11) 00:10:49.742 12.800 - 12.851: 91.5361% ( 8) 00:10:49.742 12.851 - 12.903: 91.7512% ( 14) 00:10:49.742 12.903 - 12.954: 91.8433% ( 6) 00:10:49.742 12.954 - 13.006: 91.9816% ( 9) 00:10:49.742 13.006 - 13.057: 92.0737% ( 6) 00:10:49.742 13.057 - 13.108: 92.1813% ( 7) 00:10:49.742 13.108 - 13.160: 92.2427% ( 4) 00:10:49.742 13.160 - 13.263: 92.4424% ( 13) 00:10:49.742 13.263 - 13.365: 92.5192% ( 5) 00:10:49.742 13.365 - 13.468: 92.7035% ( 12) 00:10:49.742 13.468 - 13.571: 92.8571% ( 10) 00:10:49.742 13.571 - 13.674: 92.9800% ( 8) 00:10:49.742 13.674 - 13.777: 93.1336% ( 10) 00:10:49.742 13.777 - 13.880: 93.2258% ( 6) 00:10:49.742 13.880 - 13.982: 93.3180% ( 6) 00:10:49.742 13.982 - 
14.085: 93.3641% ( 3) 00:10:49.742 14.085 - 14.188: 93.5637% ( 13) 00:10:49.742 14.188 - 14.291: 93.6252% ( 4) 00:10:49.742 14.291 - 14.394: 93.7174% ( 6) 00:10:49.742 14.394 - 14.496: 93.7942% ( 5) 00:10:49.742 14.496 - 14.599: 93.8402% ( 3) 00:10:49.742 14.599 - 14.702: 93.9171% ( 5) 00:10:49.742 14.702 - 14.805: 93.9631% ( 3) 00:10:49.742 14.805 - 14.908: 94.0092% ( 3) 00:10:49.742 14.908 - 15.010: 94.0707% ( 4) 00:10:49.742 15.010 - 15.113: 94.2704% ( 13) 00:10:49.742 15.113 - 15.216: 94.5008% ( 15) 00:10:49.742 15.216 - 15.319: 94.6544% ( 10) 00:10:49.742 15.319 - 15.422: 94.7005% ( 3) 00:10:49.742 15.422 - 15.524: 94.7773% ( 5) 00:10:49.742 15.524 - 15.627: 94.8080% ( 2) 00:10:49.742 15.627 - 15.730: 94.8387% ( 2) 00:10:49.742 15.730 - 15.833: 94.8848% ( 3) 00:10:49.742 15.833 - 15.936: 94.9770% ( 6) 00:10:49.742 15.936 - 16.039: 95.0845% ( 7) 00:10:49.742 16.039 - 16.141: 95.1459% ( 4) 00:10:49.742 16.141 - 16.244: 95.1767% ( 2) 00:10:49.742 16.244 - 16.347: 95.2074% ( 2) 00:10:49.742 16.347 - 16.450: 95.2227% ( 1) 00:10:49.742 16.450 - 16.553: 95.2535% ( 2) 00:10:49.742 16.553 - 16.655: 95.2842% ( 2) 00:10:49.742 16.655 - 16.758: 95.3303% ( 3) 00:10:49.742 16.861 - 16.964: 95.3610% ( 2) 00:10:49.742 17.067 - 17.169: 95.3763% ( 1) 00:10:49.742 17.169 - 17.272: 95.3917% ( 1) 00:10:49.742 17.375 - 17.478: 95.4378% ( 3) 00:10:49.742 17.478 - 17.581: 95.5607% ( 8) 00:10:49.742 17.581 - 17.684: 95.8679% ( 20) 00:10:49.742 17.684 - 17.786: 96.0215% ( 10) 00:10:49.742 17.786 - 17.889: 96.0676% ( 3) 00:10:49.742 17.889 - 17.992: 96.1444% ( 5) 00:10:49.742 17.992 - 18.095: 96.1905% ( 3) 00:10:49.742 18.095 - 18.198: 96.2058% ( 1) 00:10:49.742 18.198 - 18.300: 96.2519% ( 3) 00:10:49.742 18.300 - 18.403: 96.3134% ( 4) 00:10:49.742 18.506 - 18.609: 96.3287% ( 1) 00:10:49.742 18.609 - 18.712: 96.3748% ( 3) 00:10:49.742 18.917 - 19.020: 96.4055% ( 2) 00:10:49.742 19.020 - 19.123: 96.4209% ( 1) 00:10:49.742 19.123 - 19.226: 96.4823% ( 4) 00:10:49.742 19.226 - 19.329: 96.5438% ( 4) 00:10:49.742 19.329 - 19.431: 96.5899% ( 3) 00:10:49.742 19.431 - 19.534: 96.6820% ( 6) 00:10:49.742 19.534 - 19.637: 96.7281% ( 3) 00:10:49.742 19.637 - 19.740: 96.7896% ( 4) 00:10:49.742 19.740 - 19.843: 96.8356% ( 3) 00:10:49.742 19.843 - 19.945: 96.8510% ( 1) 00:10:49.742 20.151 - 20.254: 96.8971% ( 3) 00:10:49.742 20.459 - 20.562: 96.9278% ( 2) 00:10:49.742 20.562 - 20.665: 96.9432% ( 1) 00:10:49.742 20.768 - 20.871: 96.9585% ( 1) 00:10:49.742 20.871 - 20.973: 97.0353% ( 5) 00:10:49.742 20.973 - 21.076: 97.0968% ( 4) 00:10:49.742 21.179 - 21.282: 97.1275% ( 2) 00:10:49.742 21.282 - 21.385: 97.1582% ( 2) 00:10:49.743 21.488 - 21.590: 97.1889% ( 2) 00:10:49.743 21.590 - 21.693: 97.2043% ( 1) 00:10:49.743 21.693 - 21.796: 97.2504% ( 3) 00:10:49.743 21.899 - 22.002: 97.2811% ( 2) 00:10:49.743 22.002 - 22.104: 97.3579% ( 5) 00:10:49.743 22.104 - 22.207: 97.3886% ( 2) 00:10:49.743 22.207 - 22.310: 97.4808% ( 6) 00:10:49.743 22.310 - 22.413: 97.5576% ( 5) 00:10:49.743 22.413 - 22.516: 97.6037% ( 3) 00:10:49.743 22.516 - 22.618: 97.6190% ( 1) 00:10:49.743 22.618 - 22.721: 97.6498% ( 2) 00:10:49.743 22.721 - 22.824: 97.7266% ( 5) 00:10:49.743 22.927 - 23.030: 97.7727% ( 3) 00:10:49.743 23.030 - 23.133: 97.7880% ( 1) 00:10:49.743 23.133 - 23.235: 97.8187% ( 2) 00:10:49.743 23.235 - 23.338: 97.8495% ( 2) 00:10:49.743 23.441 - 23.544: 97.8648% ( 1) 00:10:49.743 23.749 - 23.852: 97.8802% ( 1) 00:10:49.743 24.161 - 24.263: 97.8955% ( 1) 00:10:49.743 24.263 - 24.366: 97.9109% ( 1) 00:10:49.743 24.366 - 24.469: 97.9263% ( 1) 
00:10:49.743 24.469 - 24.572: 97.9416% ( 1) 00:10:49.743 24.675 - 24.778: 97.9570% ( 1) 00:10:49.743 24.778 - 24.880: 97.9724% ( 1) 00:10:49.743 24.880 - 24.983: 98.0031% ( 2) 00:10:49.743 24.983 - 25.086: 98.0799% ( 5) 00:10:49.743 25.086 - 25.189: 98.0952% ( 1) 00:10:49.743 25.189 - 25.292: 98.1567% ( 4) 00:10:49.743 25.292 - 25.394: 98.1874% ( 2) 00:10:49.743 25.394 - 25.497: 98.2028% ( 1) 00:10:49.743 25.497 - 25.600: 98.2181% ( 1) 00:10:49.743 25.600 - 25.703: 98.2796% ( 4) 00:10:49.743 25.703 - 25.806: 98.2949% ( 1) 00:10:49.743 25.806 - 25.908: 98.3103% ( 1) 00:10:49.743 25.908 - 26.011: 98.4178% ( 7) 00:10:49.743 26.011 - 26.114: 98.6329% ( 14) 00:10:49.743 26.114 - 26.217: 98.8786% ( 16) 00:10:49.743 26.217 - 26.320: 98.9862% ( 7) 00:10:49.743 26.320 - 26.525: 99.0937% ( 7) 00:10:49.743 26.525 - 26.731: 99.2012% ( 7) 00:10:49.743 26.731 - 26.937: 99.2934% ( 6) 00:10:49.743 26.937 - 27.142: 99.3856% ( 6) 00:10:49.743 27.142 - 27.348: 99.4163% ( 2) 00:10:49.743 27.553 - 27.759: 99.4316% ( 1) 00:10:49.743 27.965 - 28.170: 99.4624% ( 2) 00:10:49.743 28.376 - 28.582: 99.4777% ( 1) 00:10:49.743 28.582 - 28.787: 99.4931% ( 1) 00:10:49.743 28.787 - 28.993: 99.5238% ( 2) 00:10:49.743 28.993 - 29.198: 99.5545% ( 2) 00:10:49.743 29.404 - 29.610: 99.5853% ( 2) 00:10:49.743 29.610 - 29.815: 99.6006% ( 1) 00:10:49.743 30.021 - 30.227: 99.6160% ( 1) 00:10:49.743 30.227 - 30.432: 99.6313% ( 1) 00:10:49.743 31.049 - 31.255: 99.6467% ( 1) 00:10:49.743 31.460 - 31.666: 99.6621% ( 1) 00:10:49.743 31.666 - 31.871: 99.7081% ( 3) 00:10:49.743 32.283 - 32.488: 99.7235% ( 1) 00:10:49.743 33.105 - 33.311: 99.7389% ( 1) 00:10:49.743 33.928 - 34.133: 99.7542% ( 1) 00:10:49.743 36.601 - 36.806: 99.7696% ( 1) 00:10:49.743 37.629 - 37.835: 99.7849% ( 1) 00:10:49.743 37.835 - 38.040: 99.8003% ( 1) 00:10:49.743 40.508 - 40.713: 99.8157% ( 1) 00:10:49.743 41.741 - 41.947: 99.8310% ( 1) 00:10:49.743 43.592 - 43.798: 99.8464% ( 1) 00:10:49.743 44.826 - 45.031: 99.8618% ( 1) 00:10:49.743 46.059 - 46.265: 99.8925% ( 2) 00:10:49.743 46.676 - 46.882: 99.9078% ( 1) 00:10:49.743 51.406 - 51.611: 99.9232% ( 1) 00:10:49.743 74.847 - 75.258: 99.9386% ( 1) 00:10:49.743 95.820 - 96.231: 99.9539% ( 1) 00:10:49.743 115.971 - 116.794: 99.9693% ( 1) 00:10:49.743 134.066 - 134.888: 99.9846% ( 1) 00:10:49.743 162.853 - 163.676: 100.0000% ( 1) 00:10:49.743 00:10:49.743 ************************************ 00:10:49.743 END TEST nvme_overhead 00:10:49.743 ************************************ 00:10:49.743 00:10:49.743 real 0m1.283s 00:10:49.743 user 0m1.086s 00:10:49.743 sys 0m0.141s 00:10:49.743 20:17:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:49.743 20:17:19 -- common/autotest_common.sh@10 -- # set +x 00:10:49.743 20:17:19 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:49.743 20:17:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:10:49.743 20:17:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:49.743 20:17:19 -- common/autotest_common.sh@10 -- # set +x 00:10:49.743 ************************************ 00:10:49.743 START TEST nvme_arbitration 00:10:49.743 ************************************ 00:10:49.743 20:17:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:53.932 Initializing NVMe Controllers 00:10:53.932 Attached to 0000:00:10.0 00:10:53.932 Attached to 0000:00:11.0 00:10:53.932 Attached to 0000:00:13.0 00:10:53.932 Attached to 0000:00:12.0 
00:10:53.932 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:53.932 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:10:53.932 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:53.932 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:53.932 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:53.932 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:53.932 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:53.932 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:53.932 Initialization complete. Launching workers. 00:10:53.932 Starting thread on core 1 with urgent priority queue 00:10:53.932 Starting thread on core 2 with urgent priority queue 00:10:53.932 Starting thread on core 3 with urgent priority queue 00:10:53.932 Starting thread on core 0 with urgent priority queue 00:10:53.932 QEMU NVMe Ctrl (12340 ) core 0: 490.67 IO/s 203.80 secs/100000 ios 00:10:53.932 QEMU NVMe Ctrl (12342 ) core 0: 490.67 IO/s 203.80 secs/100000 ios 00:10:53.932 QEMU NVMe Ctrl (12341 ) core 1: 512.00 IO/s 195.31 secs/100000 ios 00:10:53.932 QEMU NVMe Ctrl (12342 ) core 1: 512.00 IO/s 195.31 secs/100000 ios 00:10:53.932 QEMU NVMe Ctrl (12343 ) core 2: 576.00 IO/s 173.61 secs/100000 ios 00:10:53.932 QEMU NVMe Ctrl (12342 ) core 3: 554.67 IO/s 180.29 secs/100000 ios 00:10:53.932 ======================================================== 00:10:53.932 00:10:53.932 00:10:53.932 real 0m3.435s 00:10:53.932 user 0m9.450s 00:10:53.932 sys 0m0.150s 00:10:53.932 ************************************ 00:10:53.932 END TEST nvme_arbitration 00:10:53.932 ************************************ 00:10:53.932 20:17:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:53.932 20:17:23 -- common/autotest_common.sh@10 -- # set +x 00:10:53.932 20:17:23 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:53.932 20:17:23 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:10:53.932 20:17:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.932 20:17:23 -- common/autotest_common.sh@10 -- # set +x 00:10:53.932 ************************************ 00:10:53.932 START TEST nvme_single_aen 00:10:53.932 ************************************ 00:10:53.932 20:17:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:53.932 Asynchronous Event Request test 00:10:53.932 Attached to 0000:00:10.0 00:10:53.932 Attached to 0000:00:11.0 00:10:53.932 Attached to 0000:00:13.0 00:10:53.932 Attached to 0000:00:12.0 00:10:53.932 Reset controller to setup AER completions for this process 00:10:53.932 Registering asynchronous event callbacks... 
00:10:53.932 Getting orig temperature thresholds of all controllers 00:10:53.932 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:53.932 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:53.932 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:53.932 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:53.932 Setting all controllers temperature threshold low to trigger AER 00:10:53.932 Waiting for all controllers temperature threshold to be set lower 00:10:53.932 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:53.932 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:53.932 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:53.932 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:53.932 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:53.932 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:53.932 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:53.932 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:53.932 Waiting for all controllers to trigger AER and reset threshold 00:10:53.932 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.932 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.932 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.932 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:53.932 Cleaning up... 00:10:53.932 00:10:53.932 real 0m0.264s 00:10:53.932 user 0m0.102s 00:10:53.932 sys 0m0.123s 00:10:53.932 ************************************ 00:10:53.932 END TEST nvme_single_aen 00:10:53.932 ************************************ 00:10:53.932 20:17:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:53.932 20:17:23 -- common/autotest_common.sh@10 -- # set +x 00:10:53.932 20:17:23 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:53.932 20:17:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:53.932 20:17:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.932 20:17:23 -- common/autotest_common.sh@10 -- # set +x 00:10:53.932 ************************************ 00:10:53.932 START TEST nvme_doorbell_aers 00:10:53.932 ************************************ 00:10:53.932 20:17:23 -- common/autotest_common.sh@1111 -- # nvme_doorbell_aers 00:10:53.932 20:17:23 -- nvme/nvme.sh@70 -- # bdfs=() 00:10:53.932 20:17:23 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:53.932 20:17:23 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:53.932 20:17:23 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:53.932 20:17:23 -- common/autotest_common.sh@1499 -- # bdfs=() 00:10:53.932 20:17:23 -- common/autotest_common.sh@1499 -- # local bdfs 00:10:53.932 20:17:23 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:53.932 20:17:23 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:53.932 20:17:23 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:10:53.932 20:17:23 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:10:53.932 20:17:23 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:53.932 20:17:23 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:53.932 20:17:23 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:54.191 [2024-04-24 20:17:24.250485] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:04.170 Executing: test_write_invalid_db 00:11:04.170 Waiting for AER completion... 00:11:04.170 Failure: test_write_invalid_db 00:11:04.170 00:11:04.170 Executing: test_invalid_db_write_overflow_sq 00:11:04.170 Waiting for AER completion... 00:11:04.170 Failure: test_invalid_db_write_overflow_sq 00:11:04.170 00:11:04.170 Executing: test_invalid_db_write_overflow_cq 00:11:04.170 Waiting for AER completion... 00:11:04.170 Failure: test_invalid_db_write_overflow_cq 00:11:04.170 00:11:04.170 20:17:34 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:04.170 20:17:34 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:04.170 [2024-04-24 20:17:34.296136] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:14.151 Executing: test_write_invalid_db 00:11:14.151 Waiting for AER completion... 00:11:14.151 Failure: test_write_invalid_db 00:11:14.151 00:11:14.151 Executing: test_invalid_db_write_overflow_sq 00:11:14.151 Waiting for AER completion... 00:11:14.151 Failure: test_invalid_db_write_overflow_sq 00:11:14.151 00:11:14.151 Executing: test_invalid_db_write_overflow_cq 00:11:14.151 Waiting for AER completion... 00:11:14.151 Failure: test_invalid_db_write_overflow_cq 00:11:14.151 00:11:14.151 20:17:44 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:14.151 20:17:44 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:14.151 [2024-04-24 20:17:44.333904] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:24.146 Executing: test_write_invalid_db 00:11:24.147 Waiting for AER completion... 00:11:24.147 Failure: test_write_invalid_db 00:11:24.147 00:11:24.147 Executing: test_invalid_db_write_overflow_sq 00:11:24.147 Waiting for AER completion... 00:11:24.147 Failure: test_invalid_db_write_overflow_sq 00:11:24.147 00:11:24.147 Executing: test_invalid_db_write_overflow_cq 00:11:24.147 Waiting for AER completion... 00:11:24.147 Failure: test_invalid_db_write_overflow_cq 00:11:24.147 00:11:24.147 20:17:54 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:24.147 20:17:54 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:24.406 [2024-04-24 20:17:54.410705] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:34.385 Executing: test_write_invalid_db 00:11:34.385 Waiting for AER completion... 00:11:34.385 Failure: test_write_invalid_db 00:11:34.385 00:11:34.385 Executing: test_invalid_db_write_overflow_sq 00:11:34.385 Waiting for AER completion... 00:11:34.385 Failure: test_invalid_db_write_overflow_sq 00:11:34.385 00:11:34.385 Executing: test_invalid_db_write_overflow_cq 00:11:34.385 Waiting for AER completion... 
00:11:34.385 Failure: test_invalid_db_write_overflow_cq 00:11:34.385 00:11:34.385 00:11:34.385 real 0m40.313s 00:11:34.385 user 0m29.190s 00:11:34.385 sys 0m10.733s 00:11:34.385 20:18:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:34.385 20:18:04 -- common/autotest_common.sh@10 -- # set +x 00:11:34.385 ************************************ 00:11:34.385 END TEST nvme_doorbell_aers 00:11:34.385 ************************************ 00:11:34.385 20:18:04 -- nvme/nvme.sh@97 -- # uname 00:11:34.385 20:18:04 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:11:34.385 20:18:04 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:34.385 20:18:04 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:11:34.385 20:18:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:34.385 20:18:04 -- common/autotest_common.sh@10 -- # set +x 00:11:34.385 ************************************ 00:11:34.385 START TEST nvme_multi_aen 00:11:34.385 ************************************ 00:11:34.385 20:18:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:11:34.385 [2024-04-24 20:18:04.588790] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:34.385 [2024-04-24 20:18:04.588890] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:34.385 [2024-04-24 20:18:04.588917] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:34.385 [2024-04-24 20:18:04.590449] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:34.385 [2024-04-24 20:18:04.590484] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:34.385 [2024-04-24 20:18:04.590506] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:34.385 [2024-04-24 20:18:04.591843] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:34.385 [2024-04-24 20:18:04.592035] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:34.385 [2024-04-24 20:18:04.592141] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:34.385 [2024-04-24 20:18:04.593471] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:34.385 [2024-04-24 20:18:04.593696] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 00:11:34.385 [2024-04-24 20:18:04.593839] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70110) is not found. Dropping the request. 
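The run of ERROR lines above is emitted from nvme_pcie_common.c while this test resets the controllers: admin requests still tagged with an earlier test process's pid (70110) are found pending and dropped. Since nvme_multi_aen still completes successfully below, in this run they read as reset-path noise rather than failures, though that reading is an inference from this log. A quick triage sketch for checking that every such drop references the same stale pid; the log filename build.log is an assumption:
  # Tally reset-path request drops per referenced pid; a single stale pid is the benign case.
  grep 'is not found. Dropping the request' build.log \
    | grep -o 'pid [0-9]*' | sort | uniq -c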
00:11:34.385 Child process pid: 70635 00:11:34.644 [Child] Asynchronous Event Request test 00:11:34.644 [Child] Attached to 0000:00:10.0 00:11:34.644 [Child] Attached to 0000:00:11.0 00:11:34.644 [Child] Attached to 0000:00:13.0 00:11:34.644 [Child] Attached to 0000:00:12.0 00:11:34.644 [Child] Registering asynchronous event callbacks... 00:11:34.644 [Child] Getting orig temperature thresholds of all controllers 00:11:34.644 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:34.644 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:34.644 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:34.644 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:34.644 [Child] Waiting for all controllers to trigger AER and reset threshold 00:11:34.644 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:34.644 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:34.644 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:34.644 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:34.644 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.644 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.644 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.644 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.644 [Child] Cleaning up... 00:11:34.902 Asynchronous Event Request test 00:11:34.902 Attached to 0000:00:10.0 00:11:34.902 Attached to 0000:00:11.0 00:11:34.902 Attached to 0000:00:13.0 00:11:34.902 Attached to 0000:00:12.0 00:11:34.902 Reset controller to setup AER completions for this process 00:11:34.902 Registering asynchronous event callbacks... 
00:11:34.902 Getting orig temperature thresholds of all controllers 00:11:34.902 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:34.902 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:34.902 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:34.902 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:11:34.902 Setting all controllers temperature threshold low to trigger AER 00:11:34.902 Waiting for all controllers temperature threshold to be set lower 00:11:34.902 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:34.902 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:11:34.902 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:34.902 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:11:34.902 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:34.902 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:11:34.902 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:11:34.902 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:11:34.902 Waiting for all controllers to trigger AER and reset threshold 00:11:34.902 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.902 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.902 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.902 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:11:34.902 Cleaning up... 00:11:34.902 00:11:34.902 real 0m0.603s 00:11:34.902 user 0m0.196s 00:11:34.902 sys 0m0.297s 00:11:34.902 20:18:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:34.902 20:18:04 -- common/autotest_common.sh@10 -- # set +x 00:11:34.902 ************************************ 00:11:34.902 END TEST nvme_multi_aen 00:11:34.902 ************************************ 00:11:34.902 20:18:05 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:34.902 20:18:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:34.902 20:18:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:34.902 20:18:05 -- common/autotest_common.sh@10 -- # set +x 00:11:34.902 ************************************ 00:11:34.902 START TEST nvme_startup 00:11:34.902 ************************************ 00:11:34.902 20:18:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:11:35.160 Initializing NVMe Controllers 00:11:35.160 Attached to 0000:00:10.0 00:11:35.160 Attached to 0000:00:11.0 00:11:35.160 Attached to 0000:00:13.0 00:11:35.160 Attached to 0000:00:12.0 00:11:35.160 Initialization complete. 00:11:35.160 Time used:180980.359 (us). 
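nvme_startup above reports controller bring-up in roughly 181 ms against the -t 1000000 value passed by the harness; given the "(us)" suffix on the time-used line, -t is read here as a microsecond budget, which is an inference rather than a documented fact. A minimal sketch of rerunning it by hand with the same flag, assuming the repo path used throughout this log:
  # -t mirrors the harness invocation above; the value appears to be in microseconds.
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000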
00:11:35.160 00:11:35.161 real 0m0.276s 00:11:35.161 user 0m0.101s 00:11:35.161 sys 0m0.130s 00:11:35.161 20:18:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:35.161 ************************************ 00:11:35.161 END TEST nvme_startup 00:11:35.161 ************************************ 00:11:35.161 20:18:05 -- common/autotest_common.sh@10 -- # set +x 00:11:35.419 20:18:05 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:11:35.419 20:18:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:35.419 20:18:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:35.419 20:18:05 -- common/autotest_common.sh@10 -- # set +x 00:11:35.419 ************************************ 00:11:35.419 START TEST nvme_multi_secondary 00:11:35.419 ************************************ 00:11:35.419 20:18:05 -- common/autotest_common.sh@1111 -- # nvme_multi_secondary 00:11:35.419 20:18:05 -- nvme/nvme.sh@52 -- # pid0=70699 00:11:35.419 20:18:05 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:11:35.419 20:18:05 -- nvme/nvme.sh@54 -- # pid1=70700 00:11:35.419 20:18:05 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:11:35.419 20:18:05 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:39.637 Initializing NVMe Controllers 00:11:39.637 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:39.637 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:39.637 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:39.637 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:39.637 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:39.637 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:39.637 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:39.637 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:39.637 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:39.637 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:39.637 Initialization complete. Launching workers. 
00:11:39.637 ======================================================== 00:11:39.637 Latency(us) 00:11:39.637 Device Information : IOPS MiB/s Average min max 00:11:39.637 PCIE (0000:00:10.0) NSID 1 from core 1: 4814.01 18.80 3321.12 1355.91 7772.93 00:11:39.637 PCIE (0000:00:11.0) NSID 1 from core 1: 4814.01 18.80 3323.20 1396.22 7622.56 00:11:39.637 PCIE (0000:00:13.0) NSID 1 from core 1: 4814.01 18.80 3323.39 1309.98 6941.57 00:11:39.637 PCIE (0000:00:12.0) NSID 1 from core 1: 4814.01 18.80 3324.06 1196.94 7121.47 00:11:39.637 PCIE (0000:00:12.0) NSID 2 from core 1: 4814.01 18.80 3324.11 1215.76 7289.24 00:11:39.637 PCIE (0000:00:12.0) NSID 3 from core 1: 4814.01 18.80 3324.18 1336.42 7641.71 00:11:39.637 ======================================================== 00:11:39.637 Total : 28884.09 112.83 3323.34 1196.94 7772.93 00:11:39.637 00:11:39.637 Initializing NVMe Controllers 00:11:39.637 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:39.637 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:39.637 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:39.637 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:39.637 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:39.637 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:39.637 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:39.637 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:39.637 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:39.637 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:39.637 Initialization complete. Launching workers. 00:11:39.637 ======================================================== 00:11:39.637 Latency(us) 00:11:39.637 Device Information : IOPS MiB/s Average min max 00:11:39.637 PCIE (0000:00:10.0) NSID 1 from core 2: 3150.96 12.31 5073.93 1212.12 13894.39 00:11:39.637 PCIE (0000:00:11.0) NSID 1 from core 2: 3150.96 12.31 5069.58 1341.82 13877.85 00:11:39.637 PCIE (0000:00:13.0) NSID 1 from core 2: 3150.96 12.31 5070.35 1231.11 14146.67 00:11:39.637 PCIE (0000:00:12.0) NSID 1 from core 2: 3150.96 12.31 5069.80 1217.89 15402.24 00:11:39.637 PCIE (0000:00:12.0) NSID 2 from core 2: 3150.96 12.31 5070.07 1183.02 13949.57 00:11:39.637 PCIE (0000:00:12.0) NSID 3 from core 2: 3150.96 12.31 5069.99 1186.17 13779.06 00:11:39.637 ======================================================== 00:11:39.637 Total : 18905.76 73.85 5070.62 1183.02 15402.24 00:11:39.637 00:11:39.637 20:18:09 -- nvme/nvme.sh@56 -- # wait 70699 00:11:40.670 Initializing NVMe Controllers 00:11:40.670 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:40.670 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:40.670 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:40.670 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:40.670 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:40.670 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:40.670 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:40.670 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:40.670 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:40.670 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:40.670 Initialization complete. Launching workers. 
00:11:40.670 ======================================================== 00:11:40.670 Latency(us) 00:11:40.670 Device Information : IOPS MiB/s Average min max 00:11:40.670 PCIE (0000:00:10.0) NSID 1 from core 0: 7880.90 30.78 2028.51 908.85 8806.89 00:11:40.670 PCIE (0000:00:11.0) NSID 1 from core 0: 7880.90 30.78 2029.71 942.96 8823.36 00:11:40.670 PCIE (0000:00:13.0) NSID 1 from core 0: 7880.90 30.78 2029.71 942.64 7936.49 00:11:40.670 PCIE (0000:00:12.0) NSID 1 from core 0: 7880.90 30.78 2029.68 939.62 7893.27 00:11:40.670 PCIE (0000:00:12.0) NSID 2 from core 0: 7880.90 30.78 2029.64 887.88 7875.64 00:11:40.670 PCIE (0000:00:12.0) NSID 3 from core 0: 7884.10 30.80 2028.79 814.25 7813.50 00:11:40.671 ======================================================== 00:11:40.671 Total : 47288.60 184.72 2029.34 814.25 8823.36 00:11:40.671 00:11:40.671 20:18:10 -- nvme/nvme.sh@57 -- # wait 70700 00:11:40.671 20:18:10 -- nvme/nvme.sh@61 -- # pid0=70775 00:11:40.671 20:18:10 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:11:40.671 20:18:10 -- nvme/nvme.sh@63 -- # pid1=70776 00:11:40.671 20:18:10 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:11:40.671 20:18:10 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:11:43.954 Initializing NVMe Controllers 00:11:43.954 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:43.954 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:43.954 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:43.954 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:43.954 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:11:43.954 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:11:43.954 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:11:43.954 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:11:43.954 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:11:43.954 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:11:43.954 Initialization complete. Launching workers. 
00:11:43.954 ======================================================== 00:11:43.954 Latency(us) 00:11:43.954 Device Information : IOPS MiB/s Average min max 00:11:43.954 PCIE (0000:00:10.0) NSID 1 from core 1: 5406.07 21.12 2957.42 934.73 6112.40 00:11:43.954 PCIE (0000:00:11.0) NSID 1 from core 1: 5406.07 21.12 2959.31 972.10 6112.33 00:11:43.954 PCIE (0000:00:13.0) NSID 1 from core 1: 5406.07 21.12 2959.51 965.23 6237.70 00:11:43.954 PCIE (0000:00:12.0) NSID 1 from core 1: 5406.07 21.12 2959.74 972.35 6283.52 00:11:43.954 PCIE (0000:00:12.0) NSID 2 from core 1: 5406.07 21.12 2959.84 966.50 6389.65 00:11:43.954 PCIE (0000:00:12.0) NSID 3 from core 1: 5411.40 21.14 2957.11 962.76 6082.00 00:11:43.954 ======================================================== 00:11:43.954 Total : 32441.73 126.73 2958.82 934.73 6389.65 00:11:43.954 00:11:44.213 Initializing NVMe Controllers 00:11:44.213 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:44.213 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:44.213 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:44.213 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:44.213 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:44.213 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:44.213 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:44.213 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:44.213 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:44.213 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:44.213 Initialization complete. Launching workers. 00:11:44.213 ======================================================== 00:11:44.213 Latency(us) 00:11:44.213 Device Information : IOPS MiB/s Average min max 00:11:44.213 PCIE (0000:00:10.0) NSID 1 from core 0: 5198.76 20.31 3075.13 968.29 6077.75 00:11:44.213 PCIE (0000:00:11.0) NSID 1 from core 0: 5198.76 20.31 3077.20 1001.00 6372.27 00:11:44.213 PCIE (0000:00:13.0) NSID 1 from core 0: 5198.76 20.31 3077.19 1002.57 7746.51 00:11:44.213 PCIE (0000:00:12.0) NSID 1 from core 0: 5198.76 20.31 3077.38 1004.53 7574.19 00:11:44.213 PCIE (0000:00:12.0) NSID 2 from core 0: 5198.76 20.31 3077.32 991.52 6408.44 00:11:44.213 PCIE (0000:00:12.0) NSID 3 from core 0: 5198.76 20.31 3077.28 985.11 6142.74 00:11:44.213 ======================================================== 00:11:44.213 Total : 31192.53 121.85 3076.92 968.29 7746.51 00:11:44.213 00:11:46.834 Initializing NVMe Controllers 00:11:46.834 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:46.834 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:46.834 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:46.834 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:46.834 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:46.834 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:46.834 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:46.834 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:46.834 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:46.834 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:46.834 Initialization complete. Launching workers. 
00:11:46.834 ======================================================== 00:11:46.834 Latency(us) 00:11:46.834 Device Information : IOPS MiB/s Average min max 00:11:46.834 PCIE (0000:00:10.0) NSID 1 from core 2: 3224.97 12.60 4959.87 1017.24 11949.06 00:11:46.834 PCIE (0000:00:11.0) NSID 1 from core 2: 3224.97 12.60 4961.09 1039.23 12299.95 00:11:46.834 PCIE (0000:00:13.0) NSID 1 from core 2: 3224.97 12.60 4961.02 1052.31 12869.28 00:11:46.834 PCIE (0000:00:12.0) NSID 1 from core 2: 3224.97 12.60 4958.68 1053.84 12911.35 00:11:46.834 PCIE (0000:00:12.0) NSID 2 from core 2: 3224.97 12.60 4956.82 1046.53 12696.19 00:11:46.834 PCIE (0000:00:12.0) NSID 3 from core 2: 3224.97 12.60 4956.73 1022.20 12644.92 00:11:46.834 ======================================================== 00:11:46.834 Total : 19349.84 75.59 4959.03 1017.24 12911.35 00:11:46.834 00:11:46.834 ************************************ 00:11:46.834 END TEST nvme_multi_secondary 00:11:46.834 ************************************ 00:11:46.834 20:18:16 -- nvme/nvme.sh@65 -- # wait 70775 00:11:46.834 20:18:16 -- nvme/nvme.sh@66 -- # wait 70776 00:11:46.834 00:11:46.834 real 0m10.957s 00:11:46.834 user 0m18.558s 00:11:46.834 sys 0m0.966s 00:11:46.834 20:18:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:46.834 20:18:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.834 20:18:16 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:46.834 20:18:16 -- nvme/nvme.sh@102 -- # kill_stub 00:11:46.834 20:18:16 -- common/autotest_common.sh@1075 -- # [[ -e /proc/69636 ]] 00:11:46.834 20:18:16 -- common/autotest_common.sh@1076 -- # kill 69636 00:11:46.834 20:18:16 -- common/autotest_common.sh@1077 -- # wait 69636 00:11:46.834 [2024-04-24 20:18:16.561294] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.561424] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.561537] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.561592] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.567963] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.568061] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.568115] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.568157] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.572551] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 
00:11:46.834 [2024-04-24 20:18:16.572618] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.572652] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.572679] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.576482] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.576551] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.576592] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 [2024-04-24 20:18:16.576619] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 70634) is not found. Dropping the request. 00:11:46.834 20:18:16 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:11:46.834 20:18:16 -- common/autotest_common.sh@1083 -- # echo 2 00:11:46.834 20:18:16 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:46.834 20:18:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:46.834 20:18:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:46.834 20:18:16 -- common/autotest_common.sh@10 -- # set +x 00:11:46.834 ************************************ 00:11:46.834 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:46.834 ************************************ 00:11:46.834 20:18:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:47.092 * Looking for test storage... 
00:11:47.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:47.092 20:18:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:47.092 20:18:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:47.092 20:18:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:47.092 20:18:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:47.092 20:18:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:47.092 20:18:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:47.092 20:18:17 -- common/autotest_common.sh@1510 -- # bdfs=() 00:11:47.092 20:18:17 -- common/autotest_common.sh@1510 -- # local bdfs 00:11:47.092 20:18:17 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:11:47.092 20:18:17 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:11:47.092 20:18:17 -- common/autotest_common.sh@1499 -- # bdfs=() 00:11:47.092 20:18:17 -- common/autotest_common.sh@1499 -- # local bdfs 00:11:47.092 20:18:17 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:47.092 20:18:17 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:47.092 20:18:17 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:11:47.092 20:18:17 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:11:47.092 20:18:17 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:47.092 20:18:17 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:11:47.092 20:18:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:47.092 20:18:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:47.092 20:18:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=70936 00:11:47.092 20:18:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:47.092 20:18:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 70936 00:11:47.092 20:18:17 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:47.092 20:18:17 -- common/autotest_common.sh@817 -- # '[' -z 70936 ']' 00:11:47.092 20:18:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.092 20:18:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:47.093 20:18:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.093 20:18:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:47.093 20:18:17 -- common/autotest_common.sh@10 -- # set +x 00:11:47.093 [2024-04-24 20:18:17.303880] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
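The get_first_nvme_bdf trace above reduces to a small amount of shell: gen_nvme.sh emits an SPDK JSON config and jq pulls each controller's PCI address (bdf) out of it. A minimal standalone sketch of that discovery step, with the paths and jq filter taken from the trace (the error message is illustrative):

    rootdir=/home/vagrant/spdk_repo/spdk
    # Collect every NVMe PCI address (bdf) from the generated SPDK config.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    ((${#bdfs[@]} > 0)) || { echo "no NVMe devices found" >&2; exit 1; }
    # The reset test only needs the first controller, 0000:00:10.0 in this run.
    echo "${bdfs[0]}"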
00:11:47.093 [2024-04-24 20:18:17.304151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70936 ] 00:11:47.351 [2024-04-24 20:18:17.494106] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.609 [2024-04-24 20:18:17.743123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.609 [2024-04-24 20:18:17.743345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.609 [2024-04-24 20:18:17.743530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.609 [2024-04-24 20:18:17.743506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.546 20:18:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:48.546 20:18:18 -- common/autotest_common.sh@850 -- # return 0 00:11:48.546 20:18:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:48.546 20:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.546 20:18:18 -- common/autotest_common.sh@10 -- # set +x 00:11:48.546 nvme0n1 00:11:48.546 20:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.546 20:18:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:48.546 20:18:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ZsZxn.txt 00:11:48.546 20:18:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:48.546 20:18:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.546 20:18:18 -- common/autotest_common.sh@10 -- # set +x 00:11:48.546 true 00:11:48.546 20:18:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.546 20:18:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:48.546 20:18:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1713989898 00:11:48.546 20:18:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=70964 00:11:48.546 20:18:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:48.546 20:18:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:48.546 20:18:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:51.083 20:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.083 20:18:20 -- common/autotest_common.sh@10 -- # set +x 00:11:51.083 [2024-04-24 20:18:20.784046] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:11:51.083 [2024-04-24 20:18:20.784370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:51.083 [2024-04-24 20:18:20.784400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:51.083 [2024-04-24 20:18:20.784417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.083 [2024-04-24 20:18:20.786337] 
bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:51.083 20:18:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 70964 00:11:51.083 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 70964 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 70964 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:51.083 20:18:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:51.083 20:18:20 -- common/autotest_common.sh@10 -- # set +x 00:11:51.083 20:18:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ZsZxn.txt 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ZsZxn.txt 00:11:51.083 20:18:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 70936 00:11:51.083 20:18:20 -- common/autotest_common.sh@936 -- # '[' -z 70936 ']' 00:11:51.083 20:18:20 -- common/autotest_common.sh@940 -- # kill -0 70936 00:11:51.083 20:18:20 -- common/autotest_common.sh@941 -- # uname 00:11:51.083 20:18:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:51.083 20:18:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70936 
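The status decoding traced just above (the bin_array assignment, status=2, and the two printf calls) reconstructs the NVMe completion that the error injection produced; the killprocess teardown then continues below. A sketch of base64_decode_bits pieced together from that trace, where the byte offsets 14 and 15 are an assumption consistent with a 16-byte completion entry whose status halfword sits in the top half of CPL DW3:

    base64_decode_bits() {
        # $1 = base64-encoded 16-byte completion, $2 = bit shift, $3 = bit mask.
        local bin_array status
        bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
        # Bytes 14-15 hold the status halfword; bit 0 of it is the phase tag,
        # which is why SC is read at shift 1 and SCT at shift 9.
        status=$((bin_array[14] | bin_array[15] << 8))
        printf '0x%x\n' $(((status >> $2) & $3))
    }
    base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255   # SC  -> 0x1
    base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3     # SCT -> 0x0

Those two values match the --sct 0 --sc 1 arguments passed to bdev_nvme_add_error_injection earlier, which is what the test goes on to assert.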
00:11:51.083 killing process with pid 70936 00:11:51.084 20:18:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:51.084 20:18:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:51.084 20:18:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70936' 00:11:51.084 20:18:20 -- common/autotest_common.sh@955 -- # kill 70936 00:11:51.084 20:18:20 -- common/autotest_common.sh@960 -- # wait 70936 00:11:53.620 20:18:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:53.620 20:18:23 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:53.620 00:11:53.620 real 0m6.465s 00:11:53.620 user 0m21.854s 00:11:53.620 sys 0m0.734s 00:11:53.620 20:18:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:53.620 ************************************ 00:11:53.620 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:53.620 20:18:23 -- common/autotest_common.sh@10 -- # set +x 00:11:53.620 ************************************ 00:11:53.620 20:18:23 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:53.620 20:18:23 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:53.620 20:18:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:53.620 20:18:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:53.620 20:18:23 -- common/autotest_common.sh@10 -- # set +x 00:11:53.620 ************************************ 00:11:53.620 START TEST nvme_fio 00:11:53.620 ************************************ 00:11:53.620 20:18:23 -- common/autotest_common.sh@1111 -- # nvme_fio_test 00:11:53.620 20:18:23 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:53.620 20:18:23 -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:53.620 20:18:23 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:53.620 20:18:23 -- common/autotest_common.sh@1499 -- # bdfs=() 00:11:53.620 20:18:23 -- common/autotest_common.sh@1499 -- # local bdfs 00:11:53.620 20:18:23 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:53.620 20:18:23 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:53.620 20:18:23 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:11:53.620 20:18:23 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:11:53.620 20:18:23 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:53.620 20:18:23 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:53.620 20:18:23 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:53.620 20:18:23 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:53.620 20:18:23 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:53.620 20:18:23 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:53.879 20:18:23 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:53.879 20:18:23 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:54.137 20:18:24 -- nvme/nvme.sh@41 -- # bs=4096 00:11:54.137 20:18:24 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:54.137 20:18:24 -- common/autotest_common.sh@1346 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:54.137 20:18:24 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:11:54.137 20:18:24 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:54.137 20:18:24 -- common/autotest_common.sh@1325 -- # local sanitizers 00:11:54.137 20:18:24 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:54.137 20:18:24 -- common/autotest_common.sh@1327 -- # shift 00:11:54.137 20:18:24 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:11:54.137 20:18:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:11:54.137 20:18:24 -- common/autotest_common.sh@1331 -- # grep libasan 00:11:54.137 20:18:24 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:54.137 20:18:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:11:54.137 20:18:24 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:54.137 20:18:24 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:54.137 20:18:24 -- common/autotest_common.sh@1333 -- # break 00:11:54.137 20:18:24 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:54.137 20:18:24 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:54.397 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:54.397 fio-3.35 00:11:54.397 Starting 1 thread 00:11:57.712 00:11:57.712 test: (groupid=0, jobs=1): err= 0: pid=71120: Wed Apr 24 20:18:27 2024 00:11:57.712 read: IOPS=21.1k, BW=82.3MiB/s (86.3MB/s)(165MiB/2001msec) 00:11:57.712 slat (nsec): min=3777, max=70070, avg=4930.93, stdev=1142.59 00:11:57.712 clat (usec): min=281, max=8161, avg=3033.67, stdev=347.79 00:11:57.712 lat (usec): min=286, max=8197, avg=3038.60, stdev=347.97 00:11:57.712 clat percentiles (usec): 00:11:57.712 | 1.00th=[ 2024], 5.00th=[ 2769], 10.00th=[ 2835], 20.00th=[ 2900], 00:11:57.712 | 30.00th=[ 2933], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:11:57.712 | 70.00th=[ 3064], 80.00th=[ 3130], 90.00th=[ 3425], 95.00th=[ 3589], 00:11:57.712 | 99.00th=[ 4080], 99.50th=[ 4621], 99.90th=[ 5932], 99.95th=[ 6718], 00:11:57.712 | 99.99th=[ 8094] 00:11:57.712 bw ( KiB/s): min=78840, max=87280, per=98.95%, avg=83408.00, stdev=4262.83, samples=3 00:11:57.712 iops : min=19710, max=21820, avg=20852.00, stdev=1065.71, samples=3 00:11:57.712 write: IOPS=20.9k, BW=81.8MiB/s (85.8MB/s)(164MiB/2001msec); 0 zone resets 00:11:57.712 slat (nsec): min=3908, max=36817, avg=5137.61, stdev=1142.62 00:11:57.712 clat (usec): min=211, max=8108, avg=3039.07, stdev=346.78 00:11:57.712 lat (usec): min=216, max=8118, avg=3044.20, stdev=346.98 00:11:57.712 clat percentiles (usec): 00:11:57.712 | 1.00th=[ 1991], 5.00th=[ 2769], 10.00th=[ 2835], 20.00th=[ 2900], 00:11:57.712 | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:11:57.712 | 70.00th=[ 3064], 80.00th=[ 3130], 90.00th=[ 3458], 95.00th=[ 3589], 00:11:57.712 | 99.00th=[ 4113], 99.50th=[ 4490], 99.90th=[ 5800], 99.95th=[ 6783], 00:11:57.712 | 99.99th=[ 7963] 00:11:57.712 bw ( KiB/s): min=78680, max=87576, 
per=99.61%, avg=83466.67, stdev=4486.51, samples=3 00:11:57.712 iops : min=19670, max=21894, avg=20866.67, stdev=1121.63, samples=3 00:11:57.712 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:57.712 lat (msec) : 2=0.94%, 4=97.56%, 10=1.47% 00:11:57.712 cpu : usr=99.30%, sys=0.10%, ctx=7, majf=0, minf=606 00:11:57.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:57.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:57.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:57.712 issued rwts: total=42169,41918,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:57.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:57.712 00:11:57.712 Run status group 0 (all jobs): 00:11:57.712 READ: bw=82.3MiB/s (86.3MB/s), 82.3MiB/s-82.3MiB/s (86.3MB/s-86.3MB/s), io=165MiB (173MB), run=2001-2001msec 00:11:57.712 WRITE: bw=81.8MiB/s (85.8MB/s), 81.8MiB/s-81.8MiB/s (85.8MB/s-85.8MB/s), io=164MiB (172MB), run=2001-2001msec 00:11:57.972 ----------------------------------------------------- 00:11:57.972 Suppressions used: 00:11:57.972 count bytes template 00:11:57.972 1 32 /usr/src/fio/parse.c 00:11:57.972 1 8 libtcmalloc_minimal.so 00:11:57.972 ----------------------------------------------------- 00:11:57.972 00:11:57.972 20:18:27 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:57.972 20:18:27 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:57.972 20:18:27 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:57.972 20:18:27 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:58.232 20:18:28 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:58.232 20:18:28 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:58.490 20:18:28 -- nvme/nvme.sh@41 -- # bs=4096 00:11:58.490 20:18:28 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:58.490 20:18:28 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:58.490 20:18:28 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:11:58.490 20:18:28 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:58.490 20:18:28 -- common/autotest_common.sh@1325 -- # local sanitizers 00:11:58.490 20:18:28 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:58.490 20:18:28 -- common/autotest_common.sh@1327 -- # shift 00:11:58.490 20:18:28 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:11:58.490 20:18:28 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:11:58.490 20:18:28 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:58.490 20:18:28 -- common/autotest_common.sh@1331 -- # grep libasan 00:11:58.490 20:18:28 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:11:58.490 20:18:28 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:58.490 20:18:28 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:58.490 20:18:28 -- common/autotest_common.sh@1333 -- # break 00:11:58.490 20:18:28 -- 
common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:58.490 20:18:28 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:58.490 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:58.490 fio-3.35 00:11:58.490 Starting 1 thread 00:12:02.775 00:12:02.775 test: (groupid=0, jobs=1): err= 0: pid=71185: Wed Apr 24 20:18:32 2024 00:12:02.775 read: IOPS=20.7k, BW=80.9MiB/s (84.8MB/s)(162MiB/2001msec) 00:12:02.775 slat (nsec): min=3977, max=45000, avg=5048.97, stdev=1135.32 00:12:02.775 clat (usec): min=204, max=11292, avg=3077.24, stdev=303.92 00:12:02.775 lat (usec): min=208, max=11337, avg=3082.29, stdev=304.38 00:12:02.775 clat percentiles (usec): 00:12:02.775 | 1.00th=[ 2540], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2933], 00:12:02.775 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3097], 00:12:02.775 | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3294], 95.00th=[ 3359], 00:12:02.775 | 99.00th=[ 3851], 99.50th=[ 4359], 99.90th=[ 6063], 99.95th=[ 8717], 00:12:02.775 | 99.99th=[11076] 00:12:02.775 bw ( KiB/s): min=79184, max=84320, per=99.19%, avg=82168.00, stdev=2667.17, samples=3 00:12:02.775 iops : min=19796, max=21080, avg=20542.00, stdev=666.79, samples=3 00:12:02.775 write: IOPS=20.6k, BW=80.6MiB/s (84.6MB/s)(161MiB/2001msec); 0 zone resets 00:12:02.775 slat (nsec): min=4193, max=36650, avg=5225.59, stdev=1186.35 00:12:02.775 clat (usec): min=228, max=11142, avg=3082.68, stdev=312.23 00:12:02.775 lat (usec): min=233, max=11169, avg=3087.91, stdev=312.67 00:12:02.775 clat percentiles (usec): 00:12:02.775 | 1.00th=[ 2540], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2933], 00:12:02.775 | 30.00th=[ 2966], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3097], 00:12:02.775 | 70.00th=[ 3163], 80.00th=[ 3195], 90.00th=[ 3294], 95.00th=[ 3392], 00:12:02.775 | 99.00th=[ 3884], 99.50th=[ 4424], 99.90th=[ 6849], 99.95th=[ 8979], 00:12:02.775 | 99.99th=[10683] 00:12:02.775 bw ( KiB/s): min=79296, max=84336, per=99.57%, avg=82218.67, stdev=2614.73, samples=3 00:12:02.775 iops : min=19824, max=21084, avg=20554.67, stdev=653.68, samples=3 00:12:02.775 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:12:02.775 lat (msec) : 2=0.11%, 4=99.02%, 10=0.81%, 20=0.02% 00:12:02.775 cpu : usr=99.45%, sys=0.00%, ctx=23, majf=0, minf=605 00:12:02.775 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:02.775 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.775 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:02.775 issued rwts: total=41442,41307,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:02.775 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:02.775 00:12:02.775 Run status group 0 (all jobs): 00:12:02.775 READ: bw=80.9MiB/s (84.8MB/s), 80.9MiB/s-80.9MiB/s (84.8MB/s-84.8MB/s), io=162MiB (170MB), run=2001-2001msec 00:12:02.775 WRITE: bw=80.6MiB/s (84.6MB/s), 80.6MiB/s-80.6MiB/s (84.6MB/s-84.6MB/s), io=161MiB (169MB), run=2001-2001msec 00:12:02.775 ----------------------------------------------------- 00:12:02.775 Suppressions used: 00:12:02.775 count bytes template 00:12:02.775 1 32 /usr/src/fio/parse.c 00:12:02.775 1 8 libtcmalloc_minimal.so 00:12:02.775 ----------------------------------------------------- 00:12:02.775 00:12:02.775 
20:18:32 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:02.775 20:18:32 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:02.775 20:18:32 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:02.775 20:18:32 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:02.775 20:18:32 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:02.775 20:18:32 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:02.775 20:18:32 -- nvme/nvme.sh@41 -- # bs=4096 00:12:02.775 20:18:32 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:02.775 20:18:32 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:02.775 20:18:32 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:12:02.775 20:18:32 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:02.775 20:18:32 -- common/autotest_common.sh@1325 -- # local sanitizers 00:12:02.775 20:18:32 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:02.775 20:18:32 -- common/autotest_common.sh@1327 -- # shift 00:12:02.775 20:18:32 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:12:02.775 20:18:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:12:02.775 20:18:32 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:02.775 20:18:32 -- common/autotest_common.sh@1331 -- # grep libasan 00:12:02.775 20:18:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:12:02.775 20:18:32 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:02.775 20:18:32 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:02.775 20:18:32 -- common/autotest_common.sh@1333 -- # break 00:12:02.775 20:18:32 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:02.775 20:18:32 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:03.033 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:03.033 fio-3.35 00:12:03.033 Starting 1 thread 00:12:07.307 00:12:07.307 test: (groupid=0, jobs=1): err= 0: pid=71246: Wed Apr 24 20:18:36 2024 00:12:07.307 read: IOPS=20.9k, BW=81.6MiB/s (85.6MB/s)(163MiB/2001msec) 00:12:07.307 slat (usec): min=3, max=202, avg= 5.07, stdev= 1.77 00:12:07.307 clat (usec): min=192, max=10705, avg=3050.34, stdev=547.22 00:12:07.307 lat (usec): min=196, max=10750, avg=3055.41, stdev=547.92 00:12:07.307 clat percentiles (usec): 00:12:07.307 | 1.00th=[ 2507], 5.00th=[ 2769], 10.00th=[ 2835], 20.00th=[ 2868], 00:12:07.307 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2966], 00:12:07.307 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3326], 95.00th=[ 3752], 00:12:07.307 | 99.00th=[ 5538], 99.50th=[ 7177], 99.90th=[ 8586], 99.95th=[ 9634], 00:12:07.307 | 99.99th=[10552] 00:12:07.307 bw ( KiB/s): min=80952, max=83736, per=98.38%, avg=82229.33, stdev=1406.10, samples=3 
00:12:07.307 iops : min=20238, max=20934, avg=20557.33, stdev=351.52, samples=3 00:12:07.307 write: IOPS=20.8k, BW=81.3MiB/s (85.2MB/s)(163MiB/2001msec); 0 zone resets 00:12:07.307 slat (nsec): min=4063, max=53484, avg=5267.29, stdev=1606.70 00:12:07.307 clat (usec): min=363, max=10725, avg=3059.35, stdev=565.55 00:12:07.307 lat (usec): min=368, max=10730, avg=3064.62, stdev=566.31 00:12:07.307 clat percentiles (usec): 00:12:07.307 | 1.00th=[ 2540], 5.00th=[ 2802], 10.00th=[ 2835], 20.00th=[ 2868], 00:12:07.307 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2966], 00:12:07.307 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3359], 95.00th=[ 3752], 00:12:07.307 | 99.00th=[ 5866], 99.50th=[ 7504], 99.90th=[ 8586], 99.95th=[ 9241], 00:12:07.307 | 99.99th=[10421] 00:12:07.307 bw ( KiB/s): min=80832, max=83696, per=98.89%, avg=82285.33, stdev=1432.48, samples=3 00:12:07.307 iops : min=20208, max=20924, avg=20571.33, stdev=358.12, samples=3 00:12:07.307 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:12:07.307 lat (msec) : 2=0.36%, 4=96.67%, 10=2.90%, 20=0.03% 00:12:07.307 cpu : usr=99.30%, sys=0.05%, ctx=5, majf=0, minf=605 00:12:07.307 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:07.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.307 issued rwts: total=41813,41626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.307 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.307 00:12:07.307 Run status group 0 (all jobs): 00:12:07.307 READ: bw=81.6MiB/s (85.6MB/s), 81.6MiB/s-81.6MiB/s (85.6MB/s-85.6MB/s), io=163MiB (171MB), run=2001-2001msec 00:12:07.307 WRITE: bw=81.3MiB/s (85.2MB/s), 81.3MiB/s-81.3MiB/s (85.2MB/s-85.2MB/s), io=163MiB (171MB), run=2001-2001msec 00:12:07.307 ----------------------------------------------------- 00:12:07.307 Suppressions used: 00:12:07.307 count bytes template 00:12:07.307 1 32 /usr/src/fio/parse.c 00:12:07.307 1 8 libtcmalloc_minimal.so 00:12:07.307 ----------------------------------------------------- 00:12:07.307 00:12:07.307 20:18:36 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:07.307 20:18:36 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:07.307 20:18:36 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:07.307 20:18:36 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:07.307 20:18:37 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:07.307 20:18:37 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:07.308 20:18:37 -- nvme/nvme.sh@41 -- # bs=4096 00:12:07.308 20:18:37 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:07.308 20:18:37 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:07.308 20:18:37 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:12:07.308 20:18:37 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:07.308 20:18:37 -- common/autotest_common.sh@1325 -- # local sanitizers 00:12:07.308 20:18:37 -- common/autotest_common.sh@1326 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:07.308 20:18:37 -- common/autotest_common.sh@1327 -- # shift 00:12:07.308 20:18:37 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:12:07.308 20:18:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:12:07.308 20:18:37 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:07.308 20:18:37 -- common/autotest_common.sh@1331 -- # grep libasan 00:12:07.308 20:18:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:12:07.566 20:18:37 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:07.566 20:18:37 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:07.566 20:18:37 -- common/autotest_common.sh@1333 -- # break 00:12:07.566 20:18:37 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:07.566 20:18:37 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:07.566 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:07.566 fio-3.35 00:12:07.566 Starting 1 thread 00:12:14.189 00:12:14.189 test: (groupid=0, jobs=1): err= 0: pid=71307: Wed Apr 24 20:18:43 2024 00:12:14.189 read: IOPS=21.7k, BW=84.8MiB/s (89.0MB/s)(170MiB/2001msec) 00:12:14.189 slat (nsec): min=4096, max=37330, avg=4982.19, stdev=1056.64 00:12:14.189 clat (usec): min=204, max=10516, avg=2943.89, stdev=372.53 00:12:14.189 lat (usec): min=209, max=10553, avg=2948.88, stdev=373.03 00:12:14.189 clat percentiles (usec): 00:12:14.189 | 1.00th=[ 2606], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 00:12:14.189 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:12:14.189 | 70.00th=[ 2966], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3097], 00:12:14.189 | 99.00th=[ 4178], 99.50th=[ 5604], 99.90th=[ 8225], 99.95th=[ 8356], 00:12:14.189 | 99.99th=[10290] 00:12:14.189 bw ( KiB/s): min=85536, max=87976, per=100.00%, avg=87090.67, stdev=1350.71, samples=3 00:12:14.189 iops : min=21384, max=21994, avg=21772.67, stdev=337.68, samples=3 00:12:14.189 write: IOPS=21.6k, BW=84.2MiB/s (88.3MB/s)(168MiB/2001msec); 0 zone resets 00:12:14.189 slat (nsec): min=4215, max=55019, avg=5095.40, stdev=1116.26 00:12:14.189 clat (usec): min=377, max=10379, avg=2945.08, stdev=369.35 00:12:14.189 lat (usec): min=382, max=10397, avg=2950.18, stdev=369.87 00:12:14.189 clat percentiles (usec): 00:12:14.189 | 1.00th=[ 2606], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 00:12:14.189 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2900], 60.00th=[ 2933], 00:12:14.189 | 70.00th=[ 2966], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3064], 00:12:14.189 | 99.00th=[ 4293], 99.50th=[ 5538], 99.90th=[ 8225], 99.95th=[ 8586], 00:12:14.189 | 99.99th=[10028] 00:12:14.189 bw ( KiB/s): min=85264, max=88856, per=100.00%, avg=87256.00, stdev=1827.80, samples=3 00:12:14.189 iops : min=21316, max=22214, avg=21814.00, stdev=456.95, samples=3 00:12:14.189 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:12:14.189 lat (msec) : 2=0.12%, 4=98.78%, 10=1.05%, 20=0.01% 00:12:14.189 cpu : usr=99.40%, sys=0.10%, ctx=4, majf=0, minf=603 00:12:14.189 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:14.189 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:12:14.189 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:14.189 issued rwts: total=43457,43125,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.189 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:14.189 00:12:14.189 Run status group 0 (all jobs): 00:12:14.189 READ: bw=84.8MiB/s (89.0MB/s), 84.8MiB/s-84.8MiB/s (89.0MB/s-89.0MB/s), io=170MiB (178MB), run=2001-2001msec 00:12:14.189 WRITE: bw=84.2MiB/s (88.3MB/s), 84.2MiB/s-84.2MiB/s (88.3MB/s-88.3MB/s), io=168MiB (177MB), run=2001-2001msec 00:12:14.189 ----------------------------------------------------- 00:12:14.189 Suppressions used: 00:12:14.189 count bytes template 00:12:14.189 1 32 /usr/src/fio/parse.c 00:12:14.189 1 8 libtcmalloc_minimal.so 00:12:14.189 ----------------------------------------------------- 00:12:14.189 00:12:14.189 ************************************ 00:12:14.189 END TEST nvme_fio 00:12:14.189 ************************************ 00:12:14.189 20:18:43 -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:14.189 20:18:43 -- nvme/nvme.sh@46 -- # true 00:12:14.189 00:12:14.189 real 0m19.942s 00:12:14.189 user 0m14.658s 00:12:14.189 sys 0m6.728s 00:12:14.189 20:18:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:14.189 20:18:43 -- common/autotest_common.sh@10 -- # set +x 00:12:14.189 ************************************ 00:12:14.189 END TEST nvme 00:12:14.189 ************************************ 00:12:14.189 00:12:14.189 real 1m36.211s 00:12:14.189 user 3m45.552s 00:12:14.189 sys 0m25.278s 00:12:14.189 20:18:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:14.189 20:18:43 -- common/autotest_common.sh@10 -- # set +x 00:12:14.189 20:18:43 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:12:14.189 20:18:43 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:14.189 20:18:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:14.189 20:18:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:14.189 20:18:43 -- common/autotest_common.sh@10 -- # set +x 00:12:14.189 ************************************ 00:12:14.189 START TEST nvme_scc 00:12:14.189 ************************************ 00:12:14.189 20:18:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:14.189 * Looking for test storage... 
00:12:14.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:14.189 20:18:43 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:14.189 20:18:43 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:14.189 20:18:43 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:14.189 20:18:43 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:14.189 20:18:43 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:14.189 20:18:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.189 20:18:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.190 20:18:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.190 20:18:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.190 20:18:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.190 20:18:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.190 20:18:43 -- paths/export.sh@5 -- # export PATH 00:12:14.190 20:18:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.190 20:18:43 -- nvme/functions.sh@10 -- # ctrls=() 00:12:14.190 20:18:43 -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:14.190 20:18:43 -- nvme/functions.sh@11 -- # nvmes=() 00:12:14.190 20:18:43 -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:14.190 20:18:43 -- nvme/functions.sh@12 -- # bdfs=() 00:12:14.190 20:18:43 -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:14.190 20:18:43 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:14.190 20:18:43 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:14.190 20:18:43 -- nvme/functions.sh@14 -- # nvme_name= 00:12:14.190 20:18:43 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:14.190 20:18:43 -- nvme/nvme_scc.sh@12 -- # uname 00:12:14.190 20:18:43 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 
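At this point functions.sh has only declared its bookkeeping: associative arrays that the upcoming scan fills with controller data and PCI addresses. A rough sketch of that bookkeeping once populated, assuming the /sys/class/nvme layout the trace walks below (the real scan_nvme_ctrls additionally caches each controller's full id-ctrl output, as the unrolled loop further down shows):

    # Only bdfs is filled in this sketch; ctrls/nvmes hold controller and
    # namespace data in the real helper.
    declare -A ctrls=() nvmes=() bdfs=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        name=${ctrl##*/}                                          # e.g. nvme0
        # The controller's sysfs device link resolves to its PCI function.
        bdfs[$name]=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:11.0
    done
    for name in "${!bdfs[@]}"; do echo "$name -> ${bdfs[$name]}"; done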
00:12:14.190 20:18:43 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:14.190 20:18:43 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:14.190 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:14.448 Waiting for block devices as requested 00:12:14.708 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:14.708 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:14.708 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:14.984 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.258 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:20.258 20:18:50 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:20.258 20:18:50 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:20.258 20:18:50 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:20.258 20:18:50 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:20.258 20:18:50 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:20.258 20:18:50 -- scripts/common.sh@15 -- # local i 00:12:20.258 20:18:50 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:12:20.258 20:18:50 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:20.258 20:18:50 -- scripts/common.sh@24 -- # return 0 00:12:20.258 20:18:50 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:20.258 20:18:50 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:20.258 20:18:50 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@18 -- # shift 00:12:20.258 20:18:50 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.258 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.258 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:20.258 20:18:50 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:20.258 
00:12:20.258 20:18:50 -- nvme/functions.sh@21-23 -- # nvme_get nvme0 id-ctrl (continued): the IFS=: / read -r reg val / eval loop fills the nvme0 associative array:
00:12:20.258   nvme0[rrls]=0 nvme0[cntrltype]=1 nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:12:20.258   nvme0[crdt1]=0 nvme0[crdt2]=0 nvme0[crdt3]=0 nvme0[nvmsr]=0 nvme0[vwci]=0 nvme0[mec]=0
00:12:20.258   nvme0[oacs]=0x12a nvme0[acl]=3 nvme0[aerl]=3 nvme0[frmw]=0x3 nvme0[lpa]=0x7 nvme0[elpe]=0 nvme0[npss]=0
00:12:20.259   nvme0[avscc]=0 nvme0[apsta]=0 nvme0[wctemp]=343 nvme0[cctemp]=373 nvme0[mtfa]=0
00:12:20.259   nvme0[hmpre]=0 nvme0[hmmin]=0 nvme0[tnvmcap]=0 nvme0[unvmcap]=0 nvme0[rpmbs]=0 nvme0[edstt]=0
00:12:20.259   nvme0[dsto]=0 nvme0[fwug]=0 nvme0[kas]=0 nvme0[hctma]=0 nvme0[mntmt]=0 nvme0[mxtmt]=0
00:12:20.259   nvme0[sanicap]=0 nvme0[hmminds]=0 nvme0[hmmaxd]=0 nvme0[nsetidmax]=0 nvme0[endgidmax]=0
00:12:20.259   nvme0[anatt]=0 nvme0[anacap]=0 nvme0[anagrpmax]=0 nvme0[nanagrpid]=0 nvme0[pels]=0
00:12:20.259   nvme0[domainid]=0 nvme0[megcap]=0 nvme0[sqes]=0x66 nvme0[cqes]=0x44 nvme0[maxcmd]=0 nvme0[nn]=256
00:12:20.260   nvme0[oncs]=0x15d nvme0[fuses]=0 nvme0[fna]=0 nvme0[vwc]=0x7 nvme0[awun]=0 nvme0[awupf]=0
00:12:20.260   nvme0[icsvscc]=0 nvme0[nwpc]=0 nvme0[acwu]=0 nvme0[ocfs]=0x3 nvme0[sgls]=0x1 nvme0[mnan]=0
00:12:20.260   nvme0[maxdna]=0 nvme0[maxcna]=0 nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:12:20.260   nvme0[ioccsz]=0 nvme0[iorcsz]=0 nvme0[icdoff]=0 nvme0[fcatt]=0 nvme0[msdbd]=0 nvme0[ofcs]=0
00:12:20.260   nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:12:20.260   nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' nvme0[active_power_workload]=-
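The trace above is the generic nvme_get parser at work: functions.sh@16 runs nvme-cli, and @21-23 split each "field : value" line of its output on ':' and eval the pair into a bash associative array named after the device. A minimal sketch of that pattern (nvme_get_sketch is a hypothetical name; the real functions.sh additionally shifts its arguments and handles id-ns output through the same loop):

    # Sketch only -- parse "field : value" lines from nvme-cli into an
    # associative array named by $1 (e.g. nvme0), as the trace above does.
    nvme_get_sketch() {
        local ref=$1 reg val
        local -gA "$ref=()"                  # declare the array at global scope
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # strip padding around the field name
            val=${val# }                     # drop the single space after ':'
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"${val}\"" # e.g. nvme0[oacs]="0x12a"
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "/dev/${ref}")
    }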
00:12:20.260 20:18:50 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:12:20.260 20:18:50 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:12:20.260 20:18:50 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:12:20.260 20:18:50 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:12:20.260 20:18:50 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:12:20.260 20:18:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:12:20.260 20:18:50 -- nvme/functions.sh@21-23 -- # the same read loop fills the nvme0n1 associative array:
00:12:20.260   nvme0n1[nsze]=0x140000 nvme0n1[ncap]=0x140000 nvme0n1[nuse]=0x140000
00:12:20.261   nvme0n1[nsfeat]=0x14 nvme0n1[nlbaf]=7 nvme0n1[flbas]=0x4 nvme0n1[mc]=0x3 nvme0n1[dpc]=0x1f nvme0n1[dps]=0
00:12:20.261   nvme0n1[nmic]=0 nvme0n1[rescap]=0 nvme0n1[fpi]=0 nvme0n1[dlfeat]=1
00:12:20.261   nvme0n1[nawun]=0 nvme0n1[nawupf]=0 nvme0n1[nacwu]=0 nvme0n1[nabsn]=0 nvme0n1[nabo]=0 nvme0n1[nabspf]=0
00:12:20.261   nvme0n1[noiob]=0 nvme0n1[nvmcap]=0 nvme0n1[npwg]=0 nvme0n1[npwa]=0 nvme0n1[npdg]=0 nvme0n1[npda]=0 nvme0n1[nows]=0
00:12:20.261   nvme0n1[mssrl]=128 nvme0n1[mcl]=128 nvme0n1[msrc]=127 nvme0n1[nulbaf]=0
00:12:20.261   nvme0n1[anagrpid]=0 nvme0n1[nsattr]=0 nvme0n1[nvmsetid]=0 nvme0n1[endgid]=0
00:12:20.262   nvme0n1[nguid]=00000000000000000000000000000000 nvme0n1[eui64]=0000000000000000
00:12:20.262   nvme0n1[lbaf0]='ms:0 lbads:9 rp:0'  nvme0n1[lbaf1]='ms:8 lbads:9 rp:0'
00:12:20.262   nvme0n1[lbaf2]='ms:16 lbads:9 rp:0' nvme0n1[lbaf3]='ms:64 lbads:9 rp:0'
00:12:20.262   nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:12:20.262   nvme0n1[lbaf5]='ms:8 lbads:12 rp:0' nvme0n1[lbaf6]='ms:16 lbads:12 rp:0' nvme0n1[lbaf7]='ms:64 lbads:12 rp:0'
00:12:20.262 20:18:50 -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme0n1
00:12:20.262 20:18:50 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:12:20.262 20:18:50 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:12:20.262 20:18:50 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:12:20.262 20:18:50 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
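The id-ns dump above is also where the block geometry comes from: nlbaf=7 means eight LBA formats (lbaf0-lbaf7) are defined, and the low nibble of flbas (0x4) selects the one in use -- lbaf4, whose lbads:12 gives 2^12 = 4096-byte data blocks with no per-block metadata (ms:0). A small sketch of that decode, using only values already parsed in this trace:

    # Sketch: derive the active block size from the parsed id-ns fields.
    flbas=${nvme0n1[flbas]}                    # 0x4 in this run
    fmt=$(( flbas & 0xf ))                     # low nibble indexes the LBA format -> 4
    lbaf=${nvme0n1[lbaf$fmt]}                  # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}  # pull out the lbads value -> 12
    echo "nvme0n1 block size: $(( 1 << lbads )) bytes"   # -> 4096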
00:12:20.262 20:18:50 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:20.262 20:18:50 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:12:20.262 20:18:50 -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:12:20.262 20:18:50 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 -> return 0 (scripts/common.sh@15-24)
00:12:20.262 20:18:50 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:12:20.262 20:18:50 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:12:20.262 20:18:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:12:20.262 20:18:50 -- nvme/functions.sh@21-23 -- # the read loop fills the nvme1 associative array:
00:12:20.262   nvme1[vid]=0x1b36 nvme1[ssvid]=0x1af4 nvme1[sn]='12340' nvme1[mn]='QEMU NVMe Ctrl' nvme1[fr]='8.0.0'
00:12:20.262   nvme1[rab]=6 nvme1[ieee]=525400 nvme1[cmic]=0 nvme1[mdts]=7 nvme1[cntlid]=0 nvme1[ver]=0x10400
00:12:20.263   nvme1[rtd3r]=0 nvme1[rtd3e]=0 nvme1[oaes]=0x100 nvme1[ctratt]=0x8000 nvme1[rrls]=0 nvme1[cntrltype]=1
00:12:20.263   nvme1[fguid]=00000000-0000-0000-0000-000000000000 nvme1[crdt1]=0 nvme1[crdt2]=0 nvme1[crdt3]=0
00:12:20.263   nvme1[nvmsr]=0 nvme1[vwci]=0 nvme1[mec]=0 nvme1[oacs]=0x12a nvme1[acl]=3 nvme1[aerl]=3
00:12:20.263   nvme1[frmw]=0x3 nvme1[lpa]=0x7 nvme1[elpe]=0 nvme1[npss]=0 nvme1[avscc]=0 nvme1[apsta]=0
00:12:20.263   nvme1[wctemp]=343 nvme1[cctemp]=373 nvme1[mtfa]=0 nvme1[hmpre]=0 nvme1[hmmin]=0
00:12:20.263   nvme1[tnvmcap]=0 nvme1[unvmcap]=0 nvme1[rpmbs]=0 nvme1[edstt]=0 nvme1[dsto]=0 nvme1[fwug]=0
00:12:20.264   nvme1[kas]=0 nvme1[hctma]=0 nvme1[mntmt]=0 nvme1[mxtmt]=0 nvme1[sanicap]=0
00:12:20.264   nvme1[hmminds]=0 nvme1[hmmaxd]=0 nvme1[nsetidmax]=0 nvme1[endgidmax]=0
00:12:20.264   nvme1[anatt]=0 nvme1[anacap]=0 nvme1[anagrpmax]=0 nvme1[nanagrpid]=0 nvme1[pels]=0
00:12:20.264   nvme1[domainid]=0 nvme1[megcap]=0 nvme1[sqes]=0x66 nvme1[cqes]=0x44 nvme1[maxcmd]=0 nvme1[nn]=256
00:12:20.264   nvme1[oncs]=0x15d nvme1[fuses]=0 nvme1[fna]=0 nvme1[vwc]=0x7 nvme1[awun]=0 nvme1[awupf]=0
00:12:20.265   nvme1[icsvscc]=0 nvme1[nwpc]=0 nvme1[acwu]=0 nvme1[ocfs]=0x3 nvme1[sgls]=0x1
00:12:20.265   nvme1[mnan]=0 nvme1[maxdna]=0 nvme1[maxcna]=0 nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:12:20.265   nvme1[ioccsz]=0 nvme1[iorcsz]=0 nvme1[icdoff]=0 nvme1[fcatt]=0 nvme1[msdbd]=0 nvme1[ofcs]=0
00:12:20.265   nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
rrl:0"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:20.265 20:18:50 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:20.265 20:18:50 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:20.265 20:18:50 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:20.265 20:18:50 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@18 -- # shift 00:12:20.265 20:18:50 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 
-- # eval 'nvme1n1[nlbaf]="7"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 
20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.265 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.265 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.265 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 
00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 
00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:20.266 20:18:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.266 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.266 20:18:50 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:20.266 20:18:50 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:20.266 20:18:50 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:20.267 20:18:50 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:20.267 20:18:50 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:20.267 20:18:50 -- 
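
At this point controller nvme1 and its namespace nvme1n1 are fully cached: `nvme_get` pipes nvme-cli's `id-ctrl`/`id-ns` output through `IFS=: read -r reg val`, splitting each `field : value` line on the first colon, and `eval`s the pair into a global associative array (`nvme1=()`, `nvme1n1=()`), after which the controller is registered in `ctrls`, `nvmes`, `bdfs` (PCI address 0000:00:10.0) and `ordered_ctrls`. A minimal, self-contained sketch of that pattern (the helper name `cache_ctrl_sketch` and the exact trimming are illustrative assumptions, not SPDK's code):

```bash
#!/usr/bin/env bash
# Sketch of the caching pattern visible in this log; assumes nvme-cli prints
# one "field      : value" line per identify field, as in the dump above.
declare -A ctrls bdfs                         # controller name -> name / PCI address

cache_ctrl_sketch() {                         # hypothetical stand-in for nvme_get
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                       # creates the global array, e.g. nvme1=()
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # "sqes      " -> "sqes", "ps    0" -> "ps0"
        val=${val# }                          # drop the padding after the colon
        [[ -n $reg && -n $val ]] || continue  # skip the banner and blank lines
        eval "${ref}[${reg}]=\"\$val\""       # e.g. nvme1[sqes]=0x66
    done < <(nvme id-ctrl "$dev")
}

for ctrl in /sys/class/nvme/nvme*; do         # same sysfs walk as functions.sh@47
    dev=${ctrl##*/}
    cache_ctrl_sketch "$dev" "/dev/$dev"
    ctrls[$dev]=$dev
    bdfs[$dev]=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
done
```

Because `read` assigns everything after the first colon to `val`, values that themselves contain colons survive intact, which is why `nvme1[ps0]` above holds the whole `mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0` string. As a sanity check on the cached values: nvme1n1 reports nsze=0x17a17a blocks with flbas=0x7, and lbaf7 (`ms:64 lbads:12 rp:0`, marked in use) means 4096-byte data blocks plus 64 bytes of metadata, so the namespace is 1,548,666 x 4 KiB, about 5.9 GiB.
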
nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:20.267 20:18:50 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:20.267 20:18:50 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:20.267 20:18:50 -- scripts/common.sh@15 -- # local i 00:12:20.267 20:18:50 -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:20.267 20:18:50 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:20.267 20:18:50 -- scripts/common.sh@24 -- # return 0 00:12:20.267 20:18:50 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:20.267 20:18:50 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:20.267 20:18:50 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@18 -- # shift 00:12:20.267 20:18:50 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # 
nvme2[ieee]=525400 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 
00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 
20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.267 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.267 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:20.267 20:18:50 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 
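
Midway through nvme2's id-ctrl dump, the same packed encodings recur that nvme1 already showed. Per the NVMe base specification, the composite temperature thresholds are reported in kelvins, and the queue-entry sizes are log2 values packed as max<<4|min; a quick decode, in plain bash arithmetic on values copied from this log:

```bash
wctemp=343 cctemp=373   # nvme2 above: warning / critical composite temperature, kelvins
echo "warn $((wctemp - 273))C, crit $((cctemp - 273))C"          # about 70C and 100C
sqes=0x66 cqes=0x44     # as nvme1 reported earlier: packed log2 SQ/CQ entry sizes
echo "SQE $((1 << (sqes & 0xf)))B, CQE $((1 << (cqes & 0xf)))B"  # 64B and 16B
```

Consistent with an emulated device, nvme2 identifies as `QEMU NVMe Ctrl` (vid 0x1b36, subnqn nqn.2019-08.org.qemu:12342 below) and reports tnvmcap and unvmcap as 0.
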
00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:20.268 20:18:50 
-- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.268 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.268 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:20.268 20:18:50 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 
00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg 
val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # 
nvme2[icdoff]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:20.269 20:18:50 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.269 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:20.269 20:18:50 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:20.269 20:18:50 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:20.269 20:18:50 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:20.269 20:18:50 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:20.269 20:18:50 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:20.269 20:18:50 -- nvme/functions.sh@18 -- # shift 00:12:20.269 20:18:50 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 
00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:20.270 20:18:50 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nvmsetid]="0"' 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.270 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.270 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.270 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 
lbads:12 rp:0 ' 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:20.271 20:18:50 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:20.271 20:18:50 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:20.271 20:18:50 -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:20.271 20:18:50 -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@18 -- # shift 00:12:20.271 20:18:50 -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:20.271 
20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nacwu]="0"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.271 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.271 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:20.271 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:20.272 20:18:50 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:20.272 20:18:50 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:20.272 20:18:50 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:20.272 20:18:50 -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:20.272 20:18:50 -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@18 -- # shift 
00:12:20.272 20:18:50 -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.272 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:20.272 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:20.272 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 
20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # 
IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:20.273 20:18:50 
-- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:20.273 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.273 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.273 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:20.274 20:18:50 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:20.274 20:18:50 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:20.274 20:18:50 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:20.274 20:18:50 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:20.274 20:18:50 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:20.274 20:18:50 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:20.274 20:18:50 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:20.274 20:18:50 -- scripts/common.sh@15 -- # local i 00:12:20.274 20:18:50 -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:20.274 20:18:50 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:20.274 20:18:50 -- scripts/common.sh@24 -- # return 0 00:12:20.274 20:18:50 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:20.274 20:18:50 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:20.274 20:18:50 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@18 -- # shift 00:12:20.274 20:18:50 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # 
IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # 
eval 'nvme3[rtd3r]="0"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.274 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:20.274 20:18:50 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:20.274 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.275 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.275 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.275 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:20.275 20:18:50 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:20.275 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.275 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.275 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.275 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:20.275 20:18:50 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:20.275 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.275 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.275 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.275 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:20.275 20:18:50 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:20.275 20:18:50 -- nvme/functions.sh@21 -- # IFS=: 00:12:20.275 20:18:50 -- nvme/functions.sh@21 -- # read -r reg val 00:12:20.275 20:18:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:20.275 20:18:50 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:20.275 20:18:50 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:20.275 20:18:50 -- nvme/functions.sh@21 -- # 
00:12:20.275 20:18:50 -- nvme/functions.sh@21-23 -- # per-field IFS=:/read/eval trace for nvme3 (id-ctrl), parsed values:
00:12:20.275 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:12:20.275 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0
00:12:20.275 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1 anatt=0
00:12:20.276 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d
00:12:20.276 fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0
00:12:20.276 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:12:20.276 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
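The trace above is the nvme_get helper walking `nvme id-ctrl` output one "reg : val" pair at a time and assigning each pair into a per-controller bash associative array. A minimal sketch of that pattern (simplified from what functions.sh does; the array name ctrl_info and the whitespace trimming are illustrative, not the exact SPDK code, which evals into a dynamically named array):

    #!/usr/bin/env bash
    # Parse "name : value" lines from `nvme id-ctrl` into an associative array.
    declare -A ctrl_info
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue           # skip lines without a value
        reg=${reg//[[:space:]]/}            # strip whitespace from the key
        val=${val#"${val%%[![:space:]]*}"}  # trim leading spaces from the value
        ctrl_info[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "oncs=${ctrl_info[oncs]}"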
00:12:20.276 rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:12:20.276 20:18:50 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:12:20.276 20:18:50 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:12:20.276 20:18:50 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:12:20.276 20:18:50 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:12:20.276 20:18:50 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:12:20.276 20:18:50 -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:12:20.276 20:18:50 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:12:20.276 20:18:50 -- nvme/functions.sh@190-197 -- # get_ctrls_with_feature scc: ctrl_has_scc checked for nvme1, nvme0, nvme3, nvme2; each reads oncs=0x15d through a nameref (local -n _ctrl=nvmeN) and passes (( oncs & 1 << 8 ))
00:12:20.277 20:18:50 -- nvme/functions.sh@206 -- # echo nvme1
00:12:20.277 20:18:50 -- nvme/functions.sh@207 -- # return 0
00:12:20.277 20:18:50 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:12:20.277 20:18:50 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:12:20.277 20:18:50 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:21.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:21.782 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:12:21.782 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:12:21.782 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:12:21.782 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:12:22.041 20:18:52 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:22.041 20:18:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:12:22.041 20:18:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:22.041 20:18:52 -- common/autotest_common.sh@10 -- # set +x
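The feature scan above is worth unpacking: each controller's id-ctrl fields live in an associative array named after the device (nvme0, nvme1, ...), and ctrl_has_scc reaches into the right array with a bash nameref, then tests bit 8 of ONCS, which the NVMe spec assigns to the Copy command. A minimal sketch of that selection logic (array contents hardcoded here for illustration; the real functions.sh populates them from the device):

    #!/usr/bin/env bash
    # One associative array per controller, as scan_nvme_ctrls builds them.
    declare -A nvme0=([oncs]=0x15d) nvme1=([oncs]=0x15d)

    ctrl_has_scc() {
        local -n _ctrl=$1            # nameref: resolve "nvme1" to the array nvme1
        local oncs=${_ctrl[oncs]}
        (( oncs & 1 << 8 ))          # ONCS bit 8 = Copy (simple copy) supported
    }

    for ctrl in nvme0 nvme1; do
        ctrl_has_scc "$ctrl" && { echo "$ctrl"; break; }
    done

0x15d has bit 8 set, so every controller here qualifies and the first one iterated (nvme1) wins.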
00:12:22.041 ************************************
00:12:22.041 START TEST nvme_simple_copy
00:12:22.041 ************************************
00:12:22.041 20:18:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:12:22.300 Initializing NVMe Controllers
00:12:22.300 Attaching to 0000:00:10.0
00:12:22.300 Controller supports SCC. Attached to 0000:00:10.0
00:12:22.300 Namespace ID: 1 size: 6GB
00:12:22.300 Initialization complete.
00:12:22.300 
00:12:22.300 Controller QEMU NVMe Ctrl (12340 )
00:12:22.300 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:12:22.300 Namespace Block Size:4096
00:12:22.300 Writing LBAs 0 to 63 with Random Data
00:12:22.300 Copied LBAs from 0 - 63 to the Destination LBA 256
00:12:22.300 LBAs matching Written Data: 64
00:12:22.300 
00:12:22.300 real	0m0.292s
00:12:22.300 user	0m0.099s
00:12:22.300 sys	0m0.092s
00:12:22.300 20:18:52 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:12:22.300 20:18:52 -- common/autotest_common.sh@10 -- # set +x
00:12:22.300 ************************************
00:12:22.300 END TEST nvme_simple_copy
00:12:22.300 ************************************
00:12:22.300 
00:12:22.300 real	0m8.763s
00:12:22.300 user	0m1.410s
00:12:22.300 sys	0m2.378s
00:12:22.560 20:18:52 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:12:22.560 20:18:52 -- common/autotest_common.sh@10 -- # set +x
00:12:22.560 ************************************
00:12:22.560 END TEST nvme_scc
00:12:22.560 ************************************
00:12:22.560 20:18:52 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:12:22.560 20:18:52 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:12:22.560 20:18:52 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:12:22.560 20:18:52 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:12:22.560 20:18:52 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:12:22.560 20:18:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:12:22.560 20:18:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:12:22.560 20:18:52 -- common/autotest_common.sh@10 -- # set +x
00:12:22.560 ************************************
00:12:22.560 START TEST nvme_fdp
00:12:22.560 ************************************
00:12:22.560 20:18:52 -- common/autotest_common.sh@1111 -- # test/nvme/nvme_fdp.sh
00:12:22.560 * Looking for test storage...
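Those START TEST / END TEST banners and the real/user/sys triplets come from autotest_common.sh's run_test helper, which brackets a command with banners and bash's time keyword. A minimal sketch of the same pattern (a hypothetical simplified wrapper, not the exact SPDK helper, which also manages xtrace state):

    #!/usr/bin/env bash
    # Run a named test: print banners around it and time the command.
    run_test() {
        local name=$1 banner='************************************'
        shift
        printf '%s\nSTART TEST %s\n%s\n' "$banner" "$name" "$banner"
        time "$@"                    # emits the real/user/sys lines
        local rc=$?                  # captured before `local` resets $?
        printf '%s\nEND TEST %s\n%s\n' "$banner" "$name" "$banner"
        return $rc
    }

    run_test nvme_simple_copy ./simple_copy -r 'trtype:pcie traddr:0000:00:10.0'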
00:12:22.560 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:12:22.560 20:18:52 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:12:22.560 20:18:52 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:12:22.560 20:18:52 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:12:22.560 20:18:52 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:12:22.560 20:18:52 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:22.560 20:18:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:22.560 20:18:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:22.560 20:18:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:22.560 20:18:52 -- paths/export.sh@2-6 -- # PATH prepends for the pinned toolchains (/opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin, /opt/go/1.21.1/bin), then export PATH and echo it; repeated sourcing leaves the same three directories on PATH several times
00:12:22.560 20:18:52 -- nvme/functions.sh@10 -- # ctrls=()
00:12:22.560 20:18:52 -- nvme/functions.sh@10 -- # declare -A ctrls
00:12:22.560 20:18:52 -- nvme/functions.sh@11 -- # nvmes=()
00:12:22.560 20:18:52 -- nvme/functions.sh@11 -- # declare -A nvmes
00:12:22.560 20:18:52 -- nvme/functions.sh@12 -- # bdfs=()
00:12:22.560 20:18:52 -- nvme/functions.sh@12 -- # declare -A bdfs
00:12:22.560 20:18:52 -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:12:22.560 20:18:52 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:12:22.560 20:18:52 -- nvme/functions.sh@14 -- # nvme_name=
00:12:22.560 20:18:52 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:22.560 20:18:52 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
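The duplicated PATH entries are harmless but noisy. A common guard against that effect (not what export.sh does here — just a hedged alternative sketch) is to prepend only when the directory is absent:

    #!/usr/bin/env bash
    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already on PATH: do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }

    for dir in /opt/golangci/1.54.2/bin /opt/protoc/21.7/bin /opt/go/1.21.1/bin; do
        path_prepend "$dir"
    done
    export PATH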
00:12:23.127 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:23.385 Waiting for block devices as requested
00:12:23.645 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:12:23.645 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:12:23.645 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:12:23.904 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:12:29.192 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:12:29.192 20:18:58 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:12:29.192 20:18:58 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:12:29.192 20:18:58 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:12:29.192 20:18:58 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:12:29.192 20:18:58 -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:12:29.192 20:18:59 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 (scripts/common.sh@15-24: no PCI block list set, return 0)
00:12:29.192 20:18:59 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:12:29.192 20:18:59 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:12:29.192 20:18:59 -- nvme/functions.sh@16-23 -- # per-field IFS=:/read/eval trace for nvme0 (id-ctrl), parsed values:
00:12:29.192 vid=0x1b36 ssvid=0x1af4 sn='12341   ' mn='QEMU NVMe Ctrl   ' fr='8.0.0   '
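setup.sh reset is what flips those 1b36:0010 functions from uio_pci_generic back to the kernel nvme driver before the scan. The underlying mechanism is plain sysfs; a minimal sketch of rebinding one function (illustrative only — SPDK's setup.sh also handles hugepages, block lists, and device quirks, and this needs root):

    #!/usr/bin/env bash
    # Rebind one PCI function (e.g. 0000:00:10.0) to the kernel nvme driver.
    bdf=0000:00:10.0
    dev=/sys/bus/pci/devices/$bdf

    # Detach from whatever driver currently owns it (uio_pci_generic here).
    [[ -e $dev/driver ]] && echo "$bdf" > "$dev/driver/unbind"

    # Ask the PCI core to consider only the nvme driver, then reprobe.
    echo nvme > "$dev/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe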
00:12:29.192 rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0
00:12:29.192 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:12:29.193 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0
00:12:29.193 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0
00:12:29.193 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0
00:12:29.194 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0
00:12:29.194 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:12:29.194 subnqn=nqn.2019-08.org.qemu:12341
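A couple of these values decode into concrete limits. MDTS is a power-of-two exponent in units of the controller's minimum memory page size, so mdts=7 caps a single data transfer at 512 KiB; a quick check in shell arithmetic (the 4 KiB page size is an assumption from CAP.MPSMIN, which this trace does not show):

    #!/usr/bin/env bash
    mdts=7
    mpsmin_bytes=4096             # assumed 4 KiB minimum memory page size
    echo "max transfer: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"   # -> 512 KiB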
00:12:29.194 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:12:29.194 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:12:29.195 20:18:59 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:12:29.195 20:18:59 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:12:29.195 20:18:59 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:12:29.195 20:18:59 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:12:29.195 20:18:59 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:12:29.195 20:18:59 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
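The namespace walk at functions.sh@54-56 above is a sysfs glob: ${ctrl##*/} strips the directory prefix, so the pattern expands to /sys/class/nvme/nvme0/nvme0n1, nvme0n2, and so on. A minimal standalone sketch of that discovery loop:

    #!/usr/bin/env bash
    # Enumerate namespaces of one controller by globbing its sysfs directory.
    ctrl=/sys/class/nvme/nvme0
    for ns in "$ctrl/${ctrl##*/}n"*; do    # .../nvme0n1, .../nvme0n2, ...
        [[ -e $ns ]] || continue           # the glob may match nothing
        echo "found namespace device: ${ns##*/}"
    done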
nvme/functions.sh@18 -- # shift 00:12:29.195 20:18:59 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:29.195 20:18:59 
-- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 
'nvme0n1[anagrpid]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.195 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:29.195 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:29.195 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r 
reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:29.196 20:18:59 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:29.196 20:18:59 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:29.196 20:18:59 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:29.196 20:18:59 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:29.196 20:18:59 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:29.196 20:18:59 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:29.196 20:18:59 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:29.196 20:18:59 -- scripts/common.sh@15 -- # local i 00:12:29.196 20:18:59 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:12:29.196 20:18:59 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:29.196 20:18:59 -- scripts/common.sh@24 -- # return 0 00:12:29.196 20:18:59 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:29.196 20:18:59 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:29.196 20:18:59 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@18 -- # shift 00:12:29.196 20:18:59 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:29.196 
20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.196 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.196 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:29.196 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.197 
20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:29.197 
20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.197 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:29.197 20:18:59 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.197 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:29.198 20:18:59 -- 
nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- 
nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:29.198 20:18:59 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.198 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.198 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 
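The run of @21/@22/@23 steps above is the nvme_get helper from nvme/functions.sh at work: the output of nvme-cli's id-ctrl (or id-ns) is consumed line by line, `IFS=:` plus `read -r reg val` split each `field : value` pair, `[[ -n $val ]]` skips fields with no value, and `eval` lands the survivor in a global associative array named after the device, which is why the trace shows assignments like nvme1[oncs]=0x15d. A minimal sketch of that pattern, reconstructed from the trace rather than quoted from the SPDK source (the whitespace trimming and command handling are assumptions; the real helper prepends the nvme-cli binary path itself, as the @16 step shows):

nvme_get() {
	# Usage for this sketch: nvme_get nvme1 nvme id-ctrl /dev/nvme1
	local ref=$1 reg val
	shift

	local -gA "$ref=()"                        # e.g. declare -gA nvme1=()
	while IFS=: read -r reg val; do
		[[ -n $val ]] || continue          # skip empty fields, as in the @22 tests
		reg=${reg//[[:space:]]/}           # "vid       " -> "vid" (assumed trim)
		# Indirect assignment into the array whose *name* is in $ref;
		# this is why the trace shows eval 'nvme1[vid]="0x1b36"'.
		eval "${ref}[${reg}]=\"\${val# }\""
	done < <("$@")                             # runs the remaining args as the command
}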
00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:29.199 
20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 
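Zooming out, the @47..@63 steps scattered through this trace form the surrounding scan loop: each /sys/class/nvme/nvme* controller is vetted with pci_can_use, dumped with nvme_get, its namespaces are walked the same way, and everything is registered in the ctrls/nvmes/bdfs/ordered_ctrls bookkeeping arrays. A hedged reconstruction using the nvme_get sketch above — array and variable names are taken from the trace, while the PCI-address lookup and the pci_can_use stub are assumptions:

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

pci_can_use() { return 0; }  # stub: the real scripts/common.sh helper filters on PCI allow/block lists

scan_nvme_ctrls() {
	local ctrl ctrl_dev ns ns_dev pci
	for ctrl in /sys/class/nvme/nvme*; do
		[[ -e $ctrl ]] || continue
		pci=$(basename "$(readlink -f "$ctrl/device")")  # assumed source of e.g. 0000:00:10.0
		pci_can_use "$pci" || continue
		ctrl_dev=${ctrl##*/}                             # nvme0, nvme1, ...
		nvme_get "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"

		# Per-controller namespace map, reached through a nameref,
		# which is what 'local -n _ctrl_ns=nvme1_ns' does in the trace.
		declare -gA "${ctrl_dev}_ns=()"
		local -n _ctrl_ns=${ctrl_dev}_ns
		for ns in "$ctrl/${ctrl##*/}n"*; do              # /sys/class/nvme/nvme1/nvme1n1 ...
			[[ -e $ns ]] || continue
			ns_dev=${ns##*/}
			nvme_get "$ns_dev" nvme id-ns "/dev/$ns_dev"
			_ctrl_ns[${ns##*n}]=$ns_dev              # keyed by namespace number
		done

		ctrls["$ctrl_dev"]=$ctrl_dev
		nvmes["$ctrl_dev"]=${ctrl_dev}_ns
		bdfs["$ctrl_dev"]=$pci
		ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
	done
}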
00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:29.199 20:18:59 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:29.199 20:18:59 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:29.199 20:18:59 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:29.199 20:18:59 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@18 -- # shift 00:12:29.199 20:18:59 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 
-- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.199 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:29.199 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.199 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # 
nvme1n1[nacwu]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 
-- # eval 'nvme1n1[mssrl]="128"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:29.200 
20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.200 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:29.200 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.200 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:29.201 20:18:59 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:29.201 20:18:59 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:29.201 20:18:59 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:29.201 20:18:59 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:29.201 20:18:59 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:29.201 20:18:59 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@49 -- # 
pci=0000:00:12.0 00:12:29.201 20:18:59 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:29.201 20:18:59 -- scripts/common.sh@15 -- # local i 00:12:29.201 20:18:59 -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:12:29.201 20:18:59 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:29.201 20:18:59 -- scripts/common.sh@24 -- # return 0 00:12:29.201 20:18:59 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:29.201 20:18:59 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:29.201 20:18:59 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@18 -- # shift 00:12:29.201 20:18:59 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # 
nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.201 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:29.201 20:18:59 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:29.201 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.202 20:18:59 -- 
nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 
-- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.202 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:29.202 20:18:59 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.202 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 
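The wall of xtrace above is the same few statements repeated once per Identify field: nvme/functions.sh@16 pipes /usr/local/src/nvme-cli/nvme id-ctrl <dev> into a loop at @21-@23 that splits each "field : value" line on the colon and evals the pair into a global associative array. Reconstructed from those trace markers (a sketch only; the real script also normalizes whitespace inside multi-token values, and its exact source may differ):

nvme_get() {
    # nvme_get <ref> <subcommand> <device>, e.g.: nvme_get nvme2 id-ctrl /dev/nvme2
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                          # one global table per controller/namespace, as at @20
    while IFS=: read -r reg val; do              # @21: split "field : value" on the colon
        [[ -n $val ]] || continue                # @22: header lines carry no value and are skipped
        reg=${reg//[[:space:]]/}                 # trim the padded field name
        eval "${ref}[${reg}]=\"${val# }\""       # @23: e.g. nvme2[vid]="0x1b36"
    done < <(/usr/local/src/nvme-cli/nvme "$@")  # @16: the nvme-cli invocation seen in the trace
}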
00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 
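Two of the values just captured, nvme2[sqes]=0x66 and nvme2[cqes]=0x44, each pack a pair of power-of-two sizes: per the NVMe Identify Controller layout, the low nibble is the required (minimum) queue-entry size and the high nibble the maximum, both as log2 of bytes. A quick decode in the same shell:

sqes=0x66 cqes=0x44
printf 'SQ entry: min %d B, max %d B\n' "$((2 ** (sqes & 0xf)))" "$((2 ** (sqes >> 4)))"  # 64 B both ways
printf 'CQ entry: min %d B, max %d B\n' "$((2 ** (cqes & 0xf)))" "$((2 ** (cqes >> 4)))"  # 16 B both ways

0x66/0x44 are the standard values for a controller that supports only the spec-defined 64-byte submission and 16-byte completion entries.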
00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 
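Once a controller's id-ctrl fields are stored, the trace below (@53-@58) moves to the per-namespace pass: a nameref is bound to the controller's namespace table, the nvme2n* children are walked in sysfs, and each one gets the same field dump via nvme id-ns, ending with the bookkeeping assignments at @58-@63. Condensed into a runnable skeleton (the wrapper name is hypothetical and the BDF lookup is assumed, since the trace only shows its result; pci_can_use and nvme_get are the functions traced above):

declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

scan_nvme_ctrls() {                                      # hypothetical wrapper name
    local ctrl pci ctrl_dev ns ns_dev
    for ctrl in /sys/class/nvme/nvme*; do                # @47
        [[ -e $ctrl ]] || continue                       # @48
        pci=$(basename "$(readlink -f "$ctrl/device")")  # @49: lookup assumed; result matches the trace
        pci_can_use "$pci" || continue                   # @50: honors the PCI allow/block filters (empty here)
        ctrl_dev=${ctrl##*/}                             # @51: e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # @52: the field dump sketched earlier
        local -n _ctrl_ns=${ctrl_dev}_ns                 # @53: nameref to this controller's ns table
        for ns in "$ctrl/${ctrl##*/}n"*; do              # @54: nvme2n1, nvme2n2, nvme2n3 ...
            [[ -e $ns ]] || continue                     # @55
            ns_dev=${ns##*/}                             # @56
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"      # @57
            _ctrl_ns[${ns##*n}]=$ns_dev                  # @58: index by namespace number
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                     # @60-@63: global bookkeeping
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
        unset -n _ctrl_ns                                # drop the nameref before the next controller
    done
}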
00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:29.203 20:18:59 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.203 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.203 20:18:59 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:29.203 20:18:59 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:29.203 20:18:59 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:29.203 20:18:59 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:29.204 20:18:59 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:29.204 20:18:59 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@18 -- # shift 00:12:29.204 20:18:59 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 
20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 
'nvme2n1[fpi]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.204 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.204 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:29.204 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:29.205 20:18:59 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:29.205 20:18:59 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:29.205 20:18:59 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:29.205 20:18:59 -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:29.205 20:18:59 -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@18 -- # shift 00:12:29.205 20:18:59 -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 
20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.205 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.205 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:29.205 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:29.206 20:18:59 -- 
nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 
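Each lbafN value stored for these namespaces, e.g. nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)', describes one supported LBA format: ms is the metadata bytes carried per block, lbads the log2 of the data block size, and rp a relative-performance hint; the '(in use)' suffix marks the format selected by the namespace's flbas field (0x4 above, i.e. lbaf4: 4096-byte blocks with no metadata). Pulling the block size back out of the stored string:

lbaf4='ms:0 lbads:12 rp:0 (in use)'
lbads=${lbaf4##*lbads:}; lbads=${lbads%% *}       # isolate the lbads token -> 12
echo "active block size: $((1 << lbads)) bytes"   # -> 4096 bytes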
00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 
-- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:29.206 20:18:59 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:12:29.206 20:18:59 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:29.206 20:18:59 -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:29.206 20:18:59 -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@18 -- # shift 00:12:29.206 20:18:59 -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:29.206 20:18:59 -- nvme/functions.sh@21 
-- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.206 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:29.206 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:29.206 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 
'nvme2n3[nmic]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.207 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.207 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:29.207 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # 
IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # 
eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:29.208 20:18:59 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:29.208 20:18:59 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:29.208 20:18:59 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:29.208 20:18:59 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:29.208 20:18:59 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:29.208 20:18:59 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:29.208 20:18:59 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:29.208 20:18:59 -- scripts/common.sh@15 -- # local i 00:12:29.208 20:18:59 -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:12:29.208 20:18:59 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:12:29.208 20:18:59 -- scripts/common.sh@24 -- # return 0 00:12:29.208 20:18:59 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:29.208 20:18:59 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:29.208 20:18:59 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@18 -- # shift 00:12:29.208 20:18:59 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- 
nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.208 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.208 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:29.208 20:18:59 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
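[Note] The nvme3 array being populated here from `nvme id-ctrl` is read back near the end of this trace through a bash nameref (the `local -n _ctrl=nvme1 ... echo 0x8000` records further down). A sketch of that lookup, following the xtrace rather than the full script:
    get_nvme_ctrl_feature() {
        local ctrl=$1 reg=$2
        [[ -n $ctrl ]] || return 1
        local -n _ctrl=$ctrl                   # nameref aliasing e.g. the nvme3 array
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }
    # get_nvme_ctrl_feature nvme3 ctratt      -> 0x88010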
00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 
20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.209 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:29.209 20:18:59 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.209 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:29.210 20:18:59 
-- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:29.210 20:18:59 
-- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.210 
20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:29.210 20:18:59 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.210 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.211 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.211 20:18:59 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.211 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.211 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.211 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.211 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.211 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.211 20:18:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.211 20:18:59 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.211 20:18:59 -- nvme/functions.sh@22 -- 
# [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.211 20:18:59 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:29.211 20:18:59 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # IFS=: 00:12:29.211 20:18:59 -- nvme/functions.sh@21 -- # read -r reg val 00:12:29.211 20:18:59 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:29.211 20:18:59 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:29.211 20:18:59 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:29.211 20:18:59 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:29.211 20:18:59 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:29.211 20:18:59 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:29.211 20:18:59 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:12:29.211 20:18:59 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:12:29.211 20:18:59 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:29.211 20:18:59 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:12:29.211 20:18:59 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:12:29.211 20:18:59 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:12:29.211 20:18:59 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:12:29.211 20:18:59 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:29.211 20:18:59 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:12:29.211 20:18:59 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:12:29.211 20:18:59 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:12:29.211 20:18:59 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:12:29.211 20:18:59 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:12:29.211 20:18:59 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:12:29.211 20:18:59 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:29.211 20:18:59 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:29.211 20:18:59 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:29.211 20:18:59 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:29.211 20:18:59 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:29.211 20:18:59 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:12:29.211 20:18:59 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:12:29.211 20:18:59 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:12:29.211 20:18:59 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:12:29.211 20:18:59 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:12:29.211 20:18:59 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:12:29.211 20:18:59 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:29.211 20:18:59 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@76 -- # 
echo 0x8000 00:12:29.211 20:18:59 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:29.211 20:18:59 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:29.211 20:18:59 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:29.211 20:18:59 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:12:29.211 20:18:59 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:12:29.211 20:18:59 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:12:29.211 20:18:59 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:12:29.211 20:18:59 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:12:29.211 20:18:59 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:12:29.211 20:18:59 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:29.211 20:18:59 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@76 -- # echo 0x88010 00:12:29.211 20:18:59 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:12:29.211 20:18:59 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:29.211 20:18:59 -- nvme/functions.sh@197 -- # echo nvme3 00:12:29.211 20:18:59 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:12:29.211 20:18:59 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:12:29.211 20:18:59 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:12:29.211 20:18:59 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:12:29.211 20:18:59 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:12:29.211 20:18:59 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:12:29.211 20:18:59 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:12:29.211 20:18:59 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:29.211 20:18:59 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:12:29.211 20:18:59 -- nvme/functions.sh@76 -- # echo 0x8000 00:12:29.211 20:18:59 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:12:29.211 20:18:59 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:12:29.211 20:18:59 -- nvme/functions.sh@204 -- # trap - ERR 00:12:29.211 20:18:59 -- nvme/functions.sh@204 -- # print_backtrace 00:12:29.211 20:18:59 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:12:29.211 20:18:59 -- common/autotest_common.sh@1139 -- # return 0 00:12:29.211 20:18:59 -- nvme/functions.sh@204 -- # trap - ERR 00:12:29.211 20:18:59 -- nvme/functions.sh@204 -- # print_backtrace 00:12:29.211 20:18:59 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:12:29.211 20:18:59 -- common/autotest_common.sh@1139 -- # return 0 00:12:29.211 20:18:59 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:12:29.211 20:18:59 -- nvme/functions.sh@206 -- # echo nvme3 00:12:29.211 20:18:59 -- nvme/functions.sh@207 -- # return 0 00:12:29.211 20:18:59 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:12:29.211 20:18:59 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:13.0 00:12:29.211 20:18:59 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:30.156 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:30.748 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:30.748 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:30.748 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:30.748 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:31.007 20:19:01 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement 
/home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:31.007 20:19:01 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:12:31.007 20:19:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.007 20:19:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.007 ************************************ 00:12:31.007 START TEST nvme_flexible_data_placement 00:12:31.007 ************************************ 00:12:31.007 20:19:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:12:31.266 Initializing NVMe Controllers 00:12:31.266 Attaching to 0000:00:13.0 00:12:31.266 Controller supports FDP Attached to 0000:00:13.0 00:12:31.266 Namespace ID: 1 Endurance Group ID: 1 00:12:31.266 Initialization complete. 00:12:31.266 00:12:31.266 ================================== 00:12:31.266 == FDP tests for Namespace: #01 == 00:12:31.266 ================================== 00:12:31.266 00:12:31.266 Get Feature: FDP: 00:12:31.266 ================= 00:12:31.266 Enabled: Yes 00:12:31.266 FDP configuration Index: 0 00:12:31.266 00:12:31.266 FDP configurations log page 00:12:31.266 =========================== 00:12:31.266 Number of FDP configurations: 1 00:12:31.266 Version: 0 00:12:31.266 Size: 112 00:12:31.266 FDP Configuration Descriptor: 0 00:12:31.266 Descriptor Size: 96 00:12:31.266 Reclaim Group Identifier format: 2 00:12:31.266 FDP Volatile Write Cache: Not Present 00:12:31.266 FDP Configuration: Valid 00:12:31.266 Vendor Specific Size: 0 00:12:31.266 Number of Reclaim Groups: 2 00:12:31.266 Number of Reclaim Unit Handles: 8 00:12:31.266 Max Placement Identifiers: 128 00:12:31.266 Number of Namespaces Supported: 256 00:12:31.266 Reclaim Unit Nominal Size: 6000000 bytes 00:12:31.266 Estimated Reclaim Unit Time Limit: Not Reported 00:12:31.266 RUH Desc #000: RUH Type: Initially Isolated 00:12:31.266 RUH Desc #001: RUH Type: Initially Isolated 00:12:31.266 RUH Desc #002: RUH Type: Initially Isolated 00:12:31.266 RUH Desc #003: RUH Type: Initially Isolated 00:12:31.266 RUH Desc #004: RUH Type: Initially Isolated 00:12:31.266 RUH Desc #005: RUH Type: Initially Isolated 00:12:31.266 RUH Desc #006: RUH Type: Initially Isolated 00:12:31.266 RUH Desc #007: RUH Type: Initially Isolated 00:12:31.266 00:12:31.266 FDP reclaim unit handle usage log page 00:12:31.266 ====================================== 00:12:31.266 Number of Reclaim Unit Handles: 8 00:12:31.266 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:12:31.266 RUH Usage Desc #001: RUH Attributes: Unused 00:12:31.266 RUH Usage Desc #002: RUH Attributes: Unused 00:12:31.266 RUH Usage Desc #003: RUH Attributes: Unused 00:12:31.266 RUH Usage Desc #004: RUH Attributes: Unused 00:12:31.266 RUH Usage Desc #005: RUH Attributes: Unused 00:12:31.266 RUH Usage Desc #006: RUH Attributes: Unused 00:12:31.266 RUH Usage Desc #007: RUH Attributes: Unused 00:12:31.266 00:12:31.266 FDP statistics log page 00:12:31.266 ======================= 00:12:31.266 Host bytes with metadata written: 930414592 00:12:31.266 Media bytes with metadata written: 930504704 00:12:31.266 Media bytes erased: 0 00:12:31.266 00:12:31.266 FDP Reclaim unit handle status 00:12:31.266 ============================== 00:12:31.266 Number of RUHS descriptors: 2 00:12:31.266 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000048b0 00:12:31.266 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:12:31.266 
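[editor's note] The controller selection earlier in this block is driven by ctratt bit 19: get_ctrls_with_feature walks every cached controller and keeps only the one whose Controller Attributes word has the FDP bit set, which is why nvme3 (ctratt=0x88010) is chosen while the 0x8000 controllers are skipped. A minimal standalone sketch of the same probe, assuming nvme-cli and jq are installed (the harness itself only reads its cached register dump, so both tools are assumptions here):

for ctrl in /dev/nvme[0-9]; do
    # ctratt bit 19 (0x80000) advertises Flexible Data Placement support
    ctratt=$(nvme id-ctrl "$ctrl" --output-format=json | jq -r '.ctratt')
    if (( ctratt & (1 << 19) )); then
        echo "$ctrl supports FDP"
    fi
done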
00:12:31.266 FDP write on placement id: 0 success 00:12:31.266 00:12:31.266 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:12:31.266 00:12:31.266 IO mgmt send: RUH update for Placement ID: #0 Success 00:12:31.266 00:12:31.266 Get Feature: FDP Events for Placement handle: #0 00:12:31.266 ======================== 00:12:31.266 Number of FDP Events: 6 00:12:31.266 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:12:31.266 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:12:31.266 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:12:31.267 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:12:31.267 FDP Event: #4 Type: Media Reallocated Enabled: No 00:12:31.267 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:12:31.267 00:12:31.267 FDP events log page 00:12:31.267 =================== 00:12:31.267 Number of FDP events: 1 00:12:31.267 FDP Event #0: 00:12:31.267 Event Type: RU Not Written to Capacity 00:12:31.267 Placement Identifier: Valid 00:12:31.267 NSID: Valid 00:12:31.267 Location: Valid 00:12:31.267 Placement Identifier: 0 00:12:31.267 Event Timestamp: b 00:12:31.267 Namespace Identifier: 1 00:12:31.267 Reclaim Group Identifier: 0 00:12:31.267 Reclaim Unit Handle Identifier: 0 00:12:31.267 00:12:31.267 FDP test passed 00:12:31.267 00:12:31.267 real 0m0.279s 00:12:31.267 user 0m0.097s 00:12:31.267 sys 0m0.080s 00:12:31.267 20:19:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.267 20:19:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.267 ************************************ 00:12:31.267 END TEST nvme_flexible_data_placement 00:12:31.267 ************************************ 00:12:31.267 00:12:31.267 real 0m8.823s 00:12:31.267 user 0m1.472s 00:12:31.267 sys 0m2.439s 00:12:31.267 20:19:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:31.267 20:19:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.267 ************************************ 00:12:31.267 END TEST nvme_fdp 00:12:31.267 ************************************ 00:12:31.527 20:19:01 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:12:31.527 20:19:01 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:31.527 20:19:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:31.527 20:19:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:31.527 20:19:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.527 ************************************ 00:12:31.527 START TEST nvme_rpc 00:12:31.527 ************************************ 00:12:31.527 20:19:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:12:31.527 * Looking for test storage... 
00:12:31.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:31.527 20:19:01 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:31.527 20:19:01 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:12:31.527 20:19:01 -- common/autotest_common.sh@1510 -- # bdfs=() 00:12:31.527 20:19:01 -- common/autotest_common.sh@1510 -- # local bdfs 00:12:31.527 20:19:01 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:12:31.527 20:19:01 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:12:31.527 20:19:01 -- common/autotest_common.sh@1499 -- # bdfs=() 00:12:31.527 20:19:01 -- common/autotest_common.sh@1499 -- # local bdfs 00:12:31.527 20:19:01 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:31.527 20:19:01 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:31.527 20:19:01 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:12:31.786 20:19:01 -- common/autotest_common.sh@1501 -- # (( 4 == 0 )) 00:12:31.786 20:19:01 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:31.786 20:19:01 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:12:31.786 20:19:01 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:12:31.786 20:19:01 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=72683 00:12:31.786 20:19:01 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:31.786 20:19:01 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:12:31.786 20:19:01 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 72683 00:12:31.786 20:19:01 -- common/autotest_common.sh@817 -- # '[' -z 72683 ']' 00:12:31.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.786 20:19:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.786 20:19:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:31.786 20:19:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.786 20:19:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:31.786 20:19:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.786 [2024-04-24 20:19:01.965815] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
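[editor's note] The get_first_nvme_bdf helper traced above builds its BDF list by piping gen_nvme.sh output through jq and echoing the first entry (0000:00:10.0 on this VM, out of the four controllers found). A standalone sketch of the same pipeline, using the repository paths from this run:

bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
# four controllers discovered here; the RPC test attaches only the first
(( ${#bdfs[@]} > 0 )) && echo "${bdfs[0]}"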
00:12:31.786 [2024-04-24 20:19:01.965944] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72683 ] 00:12:32.045 [2024-04-24 20:19:02.141020] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:32.303 [2024-04-24 20:19:02.382977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.303 [2024-04-24 20:19:02.383012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.238 20:19:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:33.238 20:19:03 -- common/autotest_common.sh@850 -- # return 0 00:12:33.238 20:19:03 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:12:33.496 Nvme0n1 00:12:33.496 20:19:03 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:12:33.496 20:19:03 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:12:33.756 request: 00:12:33.756 { 00:12:33.756 "filename": "non_existing_file", 00:12:33.756 "bdev_name": "Nvme0n1", 00:12:33.756 "method": "bdev_nvme_apply_firmware", 00:12:33.756 "req_id": 1 00:12:33.756 } 00:12:33.756 Got JSON-RPC error response 00:12:33.756 response: 00:12:33.756 { 00:12:33.756 "code": -32603, 00:12:33.756 "message": "open file failed." 00:12:33.756 } 00:12:33.756 20:19:03 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:12:33.756 20:19:03 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:12:33.756 20:19:03 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:12:34.015 20:19:04 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:34.015 20:19:04 -- nvme/nvme_rpc.sh@40 -- # killprocess 72683 00:12:34.015 20:19:04 -- common/autotest_common.sh@936 -- # '[' -z 72683 ']' 00:12:34.015 20:19:04 -- common/autotest_common.sh@940 -- # kill -0 72683 00:12:34.015 20:19:04 -- common/autotest_common.sh@941 -- # uname 00:12:34.015 20:19:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:34.015 20:19:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72683 00:12:34.015 killing process with pid 72683 00:12:34.015 20:19:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:34.015 20:19:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:34.015 20:19:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72683' 00:12:34.015 20:19:04 -- common/autotest_common.sh@955 -- # kill 72683 00:12:34.015 20:19:04 -- common/autotest_common.sh@960 -- # wait 72683 00:12:36.552 ************************************ 00:12:36.552 END TEST nvme_rpc 00:12:36.552 ************************************ 00:12:36.552 00:12:36.552 real 0m4.782s 00:12:36.552 user 0m8.546s 00:12:36.552 sys 0m0.733s 00:12:36.552 20:19:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:36.552 20:19:06 -- common/autotest_common.sh@10 -- # set +x 00:12:36.552 20:19:06 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:36.552 20:19:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:36.552 20:19:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:36.552 20:19:06 -- common/autotest_common.sh@10 -- # set +x 00:12:36.552 ************************************ 00:12:36.552 START TEST 
nvme_rpc_timeouts 00:12:36.552 ************************************ 00:12:36.552 20:19:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:12:36.552 * Looking for test storage... 00:12:36.552 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:36.552 20:19:06 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:36.552 20:19:06 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_72765 00:12:36.552 20:19:06 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_72765 00:12:36.552 20:19:06 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=72793 00:12:36.552 20:19:06 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:12:36.552 20:19:06 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:12:36.552 20:19:06 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 72793 00:12:36.552 20:19:06 -- common/autotest_common.sh@817 -- # '[' -z 72793 ']' 00:12:36.552 20:19:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.552 20:19:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:36.552 20:19:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.552 20:19:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:36.552 20:19:06 -- common/autotest_common.sh@10 -- # set +x 00:12:36.552 [2024-04-24 20:19:06.783556] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:12:36.552 [2024-04-24 20:19:06.783669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72793 ] 00:12:36.810 [2024-04-24 20:19:06.955780] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:37.068 [2024-04-24 20:19:07.204950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.068 [2024-04-24 20:19:07.204984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.003 Checking default timeout settings: 00:12:38.003 20:19:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:38.003 20:19:08 -- common/autotest_common.sh@850 -- # return 0 00:12:38.003 20:19:08 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:12:38.003 20:19:08 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:38.261 Making settings changes with rpc: 00:12:38.261 20:19:08 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:12:38.261 20:19:08 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:12:38.519 Check default vs. modified settings: 00:12:38.519 20:19:08 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:12:38.519 20:19:08 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:38.778 20:19:08 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:12:38.778 20:19:08 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:38.778 20:19:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_72765 00:12:38.778 20:19:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:38.778 20:19:08 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:38.778 20:19:09 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:12:38.778 20:19:09 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_72765 00:12:38.778 20:19:09 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:38.778 20:19:09 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:39.037 Setting action_on_timeout is changed as expected. 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_72765 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_72765 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:39.037 Setting timeout_us is changed as expected. 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_72765 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_72765 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:12:39.037 Setting timeout_admin_us is changed as expected. 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
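[editor's note] The comparison loop above pulls each setting out of the two saved configs with grep/awk/sed and echoes a confirmation whenever the default and modified values diverge. Condensed into a standalone sketch (the temp file names are placeholders; the rpc.py calls and flags are exactly the ones this test uses):

scripts/rpc.py save_config > /tmp/settings_default
scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
scripts/rpc.py save_config > /tmp/settings_modified
for setting in action_on_timeout timeout_us timeout_admin_us; do
    # strip JSON punctuation so "none", and abort compare cleanly
    before=$(grep "$setting" /tmp/settings_default | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [[ $before != "$after" ]] && echo "Setting $setting is changed as expected."
done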
00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_72765 /tmp/settings_modified_72765 00:12:39.037 20:19:09 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 72793 00:12:39.037 20:19:09 -- common/autotest_common.sh@936 -- # '[' -z 72793 ']' 00:12:39.037 20:19:09 -- common/autotest_common.sh@940 -- # kill -0 72793 00:12:39.037 20:19:09 -- common/autotest_common.sh@941 -- # uname 00:12:39.037 20:19:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:39.037 20:19:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72793 00:12:39.037 killing process with pid 72793 00:12:39.037 20:19:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:39.037 20:19:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:39.037 20:19:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72793' 00:12:39.037 20:19:09 -- common/autotest_common.sh@955 -- # kill 72793 00:12:39.037 20:19:09 -- common/autotest_common.sh@960 -- # wait 72793 00:12:41.603 RPC TIMEOUT SETTING TEST PASSED. 00:12:41.603 20:19:11 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:12:41.603 ************************************ 00:12:41.603 END TEST nvme_rpc_timeouts 00:12:41.603 ************************************ 00:12:41.603 00:12:41.603 real 0m5.024s 00:12:41.603 user 0m9.177s 00:12:41.603 sys 0m0.770s 00:12:41.603 20:19:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:41.603 20:19:11 -- common/autotest_common.sh@10 -- # set +x 00:12:41.603 20:19:11 -- spdk/autotest.sh@241 -- # '[' 1 -eq 0 ']' 00:12:41.603 20:19:11 -- spdk/autotest.sh@245 -- # [[ 1 -eq 1 ]] 00:12:41.603 20:19:11 -- spdk/autotest.sh@246 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:41.603 20:19:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:41.603 20:19:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.603 20:19:11 -- common/autotest_common.sh@10 -- # set +x 00:12:41.603 ************************************ 00:12:41.603 START TEST nvme_xnvme 00:12:41.603 ************************************ 00:12:41.604 20:19:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:41.863 * Looking for test storage... 
00:12:41.863 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:41.863 20:19:11 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:41.863 20:19:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.863 20:19:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.863 20:19:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.863 20:19:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.863 20:19:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.863 20:19:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.863 20:19:11 -- paths/export.sh@5 -- # export PATH 00:12:41.863 20:19:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:41.863 20:19:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:41.863 20:19:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.863 20:19:11 -- common/autotest_common.sh@10 -- # set +x 00:12:41.863 ************************************ 00:12:41.863 START TEST xnvme_to_malloc_dd_copy 00:12:41.863 ************************************ 00:12:41.863 20:19:11 -- common/autotest_common.sh@1111 -- # malloc_to_xnvme_copy 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:41.863 20:19:11 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:41.863 20:19:11 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:41.863 20:19:11 -- dd/common.sh@191 -- # return 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@18 -- # local io 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:41.863 
20:19:11 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:41.863 20:19:11 -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:41.863 20:19:11 -- dd/common.sh@31 -- # xtrace_disable 00:12:41.863 20:19:11 -- common/autotest_common.sh@10 -- # set +x 00:12:41.863 { 00:12:41.863 "subsystems": [ 00:12:41.863 { 00:12:41.863 "subsystem": "bdev", 00:12:41.863 "config": [ 00:12:41.863 { 00:12:41.863 "params": { 00:12:41.863 "block_size": 512, 00:12:41.863 "num_blocks": 2097152, 00:12:41.863 "name": "malloc0" 00:12:41.863 }, 00:12:41.863 "method": "bdev_malloc_create" 00:12:41.863 }, 00:12:41.863 { 00:12:41.863 "params": { 00:12:41.863 "io_mechanism": "libaio", 00:12:41.863 "filename": "/dev/nullb0", 00:12:41.863 "name": "null0" 00:12:41.863 }, 00:12:41.863 "method": "bdev_xnvme_create" 00:12:41.863 }, 00:12:41.863 { 00:12:41.863 "method": "bdev_wait_for_examine" 00:12:41.863 } 00:12:41.863 ] 00:12:41.863 } 00:12:41.863 ] 00:12:41.863 } 00:12:41.863 [2024-04-24 20:19:12.070098] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
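[editor's note] gen_conf above assembles that bdev-subsystem JSON in-process and hands it to spdk_dd through /dev/fd/62. The same malloc-to-xnvme copy can be driven from a plain config file; a sketch using the exact parameters from this run (the file name is a placeholder, paths are relative to the SPDK repo root):

cat > xnvme.json <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"block_size": 512, "num_blocks": 2097152, "name": "malloc0"},
   "method": "bdev_malloc_create"},
  {"params": {"io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
JSON
build/bin/spdk_dd --ib=malloc0 --ob=null0 --json xnvme.json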
00:12:41.863 [2024-04-24 20:19:12.070202] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72938 ] 00:12:42.123 [2024-04-24 20:19:12.238425] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.394 [2024-04-24 20:19:12.513456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.420  Copying: 257/1024 [MB] (257 MBps) Copying: 516/1024 [MB] (258 MBps) Copying: 770/1024 [MB] (254 MBps) Copying: 1024/1024 [MB] (average 257 MBps) 00:12:52.420 00:12:52.420 20:19:22 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:52.420 20:19:22 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:52.420 20:19:22 -- dd/common.sh@31 -- # xtrace_disable 00:12:52.420 20:19:22 -- common/autotest_common.sh@10 -- # set +x 00:12:52.420 { 00:12:52.420 "subsystems": [ 00:12:52.420 { 00:12:52.420 "subsystem": "bdev", 00:12:52.420 "config": [ 00:12:52.420 { 00:12:52.420 "params": { 00:12:52.420 "block_size": 512, 00:12:52.420 "num_blocks": 2097152, 00:12:52.420 "name": "malloc0" 00:12:52.420 }, 00:12:52.420 "method": "bdev_malloc_create" 00:12:52.420 }, 00:12:52.420 { 00:12:52.420 "params": { 00:12:52.420 "io_mechanism": "libaio", 00:12:52.420 "filename": "/dev/nullb0", 00:12:52.420 "name": "null0" 00:12:52.420 }, 00:12:52.420 "method": "bdev_xnvme_create" 00:12:52.420 }, 00:12:52.420 { 00:12:52.420 "method": "bdev_wait_for_examine" 00:12:52.420 } 00:12:52.420 ] 00:12:52.420 } 00:12:52.420 ] 00:12:52.420 } 00:12:52.420 [2024-04-24 20:19:22.323103] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:12:52.420 [2024-04-24 20:19:22.323231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73061 ] 00:12:52.420 [2024-04-24 20:19:22.495492] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.682 [2024-04-24 20:19:22.731577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.718  Copying: 258/1024 [MB] (258 MBps) Copying: 506/1024 [MB] (247 MBps) Copying: 752/1024 [MB] (246 MBps) Copying: 995/1024 [MB] (242 MBps) Copying: 1024/1024 [MB] (average 248 MBps) 00:13:02.718 00:13:02.718 20:19:32 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:02.718 20:19:32 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:02.718 20:19:32 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:02.718 20:19:32 -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:02.718 20:19:32 -- dd/common.sh@31 -- # xtrace_disable 00:13:02.718 20:19:32 -- common/autotest_common.sh@10 -- # set +x 00:13:02.718 { 00:13:02.718 "subsystems": [ 00:13:02.718 { 00:13:02.718 "subsystem": "bdev", 00:13:02.718 "config": [ 00:13:02.718 { 00:13:02.718 "params": { 00:13:02.718 "block_size": 512, 00:13:02.718 "num_blocks": 2097152, 00:13:02.718 "name": "malloc0" 00:13:02.718 }, 00:13:02.718 "method": "bdev_malloc_create" 00:13:02.718 }, 00:13:02.718 { 00:13:02.718 "params": { 00:13:02.718 "io_mechanism": "io_uring", 00:13:02.718 "filename": "/dev/nullb0", 00:13:02.718 "name": "null0" 00:13:02.718 }, 00:13:02.718 "method": "bdev_xnvme_create" 00:13:02.718 }, 00:13:02.718 { 00:13:02.718 "method": "bdev_wait_for_examine" 00:13:02.718 } 00:13:02.718 ] 00:13:02.718 } 00:13:02.718 ] 00:13:02.718 } 00:13:02.977 [2024-04-24 20:19:32.961254] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:13:02.977 [2024-04-24 20:19:32.961382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73183 ] 00:13:02.977 [2024-04-24 20:19:33.134562] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.235 [2024-04-24 20:19:33.390583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:13.202  Copying: 271/1024 [MB] (271 MBps) Copying: 544/1024 [MB] (272 MBps) Copying: 815/1024 [MB] (270 MBps) Copying: 1024/1024 [MB] (average 272 MBps) 00:13:13.202 00:13:13.202 20:19:43 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:13.202 20:19:43 -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:13.202 20:19:43 -- dd/common.sh@31 -- # xtrace_disable 00:13:13.202 20:19:43 -- common/autotest_common.sh@10 -- # set +x 00:13:13.202 { 00:13:13.202 "subsystems": [ 00:13:13.202 { 00:13:13.202 "subsystem": "bdev", 00:13:13.202 "config": [ 00:13:13.202 { 00:13:13.202 "params": { 00:13:13.202 "block_size": 512, 00:13:13.202 "num_blocks": 2097152, 00:13:13.202 "name": "malloc0" 00:13:13.202 }, 00:13:13.202 "method": "bdev_malloc_create" 00:13:13.202 }, 00:13:13.202 { 00:13:13.202 "params": { 00:13:13.202 "io_mechanism": "io_uring", 00:13:13.202 "filename": "/dev/nullb0", 00:13:13.202 "name": "null0" 00:13:13.202 }, 00:13:13.202 "method": "bdev_xnvme_create" 00:13:13.202 }, 00:13:13.202 { 00:13:13.202 "method": "bdev_wait_for_examine" 00:13:13.202 } 00:13:13.202 ] 00:13:13.202 } 00:13:13.202 ] 00:13:13.202 } 00:13:13.202 [2024-04-24 20:19:43.132973] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:13:13.202 [2024-04-24 20:19:43.133109] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73298 ] 00:13:13.202 [2024-04-24 20:19:43.305120] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.462 [2024-04-24 20:19:43.554359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.379  Copying: 273/1024 [MB] (273 MBps) Copying: 517/1024 [MB] (243 MBps) Copying: 769/1024 [MB] (252 MBps) Copying: 1024/1024 [MB] (average 256 MBps) 00:13:23.379 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:23.379 20:19:53 -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:23.379 00:13:23.379 real 0m41.406s 00:13:23.379 user 0m36.457s 00:13:23.379 sys 0m4.407s 00:13:23.379 20:19:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:23.379 20:19:53 -- common/autotest_common.sh@10 -- # set +x 00:13:23.379 ************************************ 00:13:23.379 END TEST xnvme_to_malloc_dd_copy 00:13:23.379 ************************************ 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:23.379 20:19:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:23.379 20:19:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:23.379 20:19:53 -- common/autotest_common.sh@10 -- # set +x 00:13:23.379 ************************************ 00:13:23.379 START TEST xnvme_bdevperf 00:13:23.379 ************************************ 00:13:23.379 20:19:53 -- common/autotest_common.sh@1111 -- # xnvme_bdevperf 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:23.379 20:19:53 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:23.379 20:19:53 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:23.379 20:19:53 -- dd/common.sh@191 -- # return 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@60 -- # local io 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:23.379 20:19:53 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:23.379 20:19:53 -- dd/common.sh@31 -- # xtrace_disable 00:13:23.379 20:19:53 -- common/autotest_common.sh@10 -- # set +x 00:13:23.379 { 00:13:23.379 "subsystems": [ 00:13:23.379 { 00:13:23.379 "subsystem": "bdev", 00:13:23.379 "config": [ 00:13:23.379 { 00:13:23.379 "params": { 00:13:23.379 "io_mechanism": "libaio", 00:13:23.379 "filename": "/dev/nullb0", 00:13:23.379 "name": "null0" 00:13:23.379 }, 00:13:23.379 "method": 
"bdev_xnvme_create" 00:13:23.379 }, 00:13:23.379 { 00:13:23.379 "method": "bdev_wait_for_examine" 00:13:23.379 } 00:13:23.379 ] 00:13:23.379 } 00:13:23.379 ] 00:13:23.379 } 00:13:23.379 [2024-04-24 20:19:53.595532] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:13:23.379 [2024-04-24 20:19:53.595650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73445 ] 00:13:23.639 [2024-04-24 20:19:53.748556] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.897 [2024-04-24 20:19:54.004188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.157 Running I/O for 5 seconds... 00:13:29.447 00:13:29.447 Latency(us) 00:13:29.447 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.447 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:29.447 null0 : 5.00 160527.48 627.06 0.00 0.00 396.33 118.44 523.10 00:13:29.447 =================================================================================================================== 00:13:29.447 Total : 160527.48 627.06 0.00 0.00 396.33 118.44 523.10 00:13:30.819 20:20:00 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:30.819 20:20:00 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:30.819 20:20:00 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:30.819 20:20:00 -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:30.819 20:20:00 -- dd/common.sh@31 -- # xtrace_disable 00:13:30.819 20:20:00 -- common/autotest_common.sh@10 -- # set +x 00:13:30.819 { 00:13:30.819 "subsystems": [ 00:13:30.819 { 00:13:30.819 "subsystem": "bdev", 00:13:30.819 "config": [ 00:13:30.819 { 00:13:30.819 "params": { 00:13:30.819 "io_mechanism": "io_uring", 00:13:30.819 "filename": "/dev/nullb0", 00:13:30.819 "name": "null0" 00:13:30.819 }, 00:13:30.819 "method": "bdev_xnvme_create" 00:13:30.819 }, 00:13:30.819 { 00:13:30.819 "method": "bdev_wait_for_examine" 00:13:30.819 } 00:13:30.819 ] 00:13:30.819 } 00:13:30.819 ] 00:13:30.819 } 00:13:30.819 [2024-04-24 20:20:00.731848] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:13:30.819 [2024-04-24 20:20:00.731975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73525 ] 00:13:30.819 [2024-04-24 20:20:00.888287] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.077 [2024-04-24 20:20:01.152787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.335 Running I/O for 5 seconds... 
00:13:36.602 00:13:36.602 Latency(us) 00:13:36.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.602 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:36.602 null0 : 5.00 205703.59 803.53 0.00 0.00 308.82 183.42 1355.46 00:13:36.602 =================================================================================================================== 00:13:36.602 Total : 205703.59 803.53 0.00 0.00 308.82 183.42 1355.46 00:13:37.985 20:20:07 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:37.985 20:20:07 -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:37.985 00:13:37.985 real 0m14.342s 00:13:37.985 user 0m10.977s 00:13:37.985 sys 0m3.160s 00:13:37.985 20:20:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:37.985 ************************************ 00:13:37.985 END TEST xnvme_bdevperf 00:13:37.985 ************************************ 00:13:37.985 20:20:07 -- common/autotest_common.sh@10 -- # set +x 00:13:37.985 00:13:37.985 real 0m56.146s 00:13:37.985 user 0m47.589s 00:13:37.985 sys 0m7.784s 00:13:37.985 20:20:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:37.985 20:20:07 -- common/autotest_common.sh@10 -- # set +x 00:13:37.985 ************************************ 00:13:37.985 END TEST nvme_xnvme 00:13:37.985 ************************************ 00:13:37.985 20:20:07 -- spdk/autotest.sh@247 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:37.985 20:20:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:37.985 20:20:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:37.985 20:20:07 -- common/autotest_common.sh@10 -- # set +x 00:13:37.985 ************************************ 00:13:37.985 START TEST blockdev_xnvme 00:13:37.985 ************************************ 00:13:37.985 20:20:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:37.985 * Looking for test storage... 
00:13:37.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:37.985 20:20:08 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:37.985 20:20:08 -- bdev/nbd_common.sh@6 -- # set -e 00:13:37.985 20:20:08 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:37.985 20:20:08 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:37.985 20:20:08 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:37.985 20:20:08 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:37.985 20:20:08 -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:37.985 20:20:08 -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:37.985 20:20:08 -- bdev/blockdev.sh@20 -- # : 00:13:37.985 20:20:08 -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:13:37.985 20:20:08 -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:13:37.985 20:20:08 -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:13:37.985 20:20:08 -- bdev/blockdev.sh@674 -- # uname -s 00:13:37.985 20:20:08 -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:13:37.985 20:20:08 -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:13:37.985 20:20:08 -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:13:37.985 20:20:08 -- bdev/blockdev.sh@683 -- # crypto_device= 00:13:37.985 20:20:08 -- bdev/blockdev.sh@684 -- # dek= 00:13:37.985 20:20:08 -- bdev/blockdev.sh@685 -- # env_ctx= 00:13:37.985 20:20:08 -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:13:37.985 20:20:08 -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:13:37.985 20:20:08 -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:13:37.985 20:20:08 -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:13:37.985 20:20:08 -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:13:37.985 20:20:08 -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73676 00:13:37.985 20:20:08 -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:37.986 20:20:08 -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:37.986 20:20:08 -- bdev/blockdev.sh@49 -- # waitforlisten 73676 00:13:37.986 20:20:08 -- common/autotest_common.sh@817 -- # '[' -z 73676 ']' 00:13:37.986 20:20:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.986 20:20:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:37.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.986 20:20:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.986 20:20:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:37.986 20:20:08 -- common/autotest_common.sh@10 -- # set +x 00:13:38.245 [2024-04-24 20:20:08.262007] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:13:38.245 [2024-04-24 20:20:08.262123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73676 ] 00:13:38.245 [2024-04-24 20:20:08.426231] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.504 [2024-04-24 20:20:08.666176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.439 20:20:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:39.439 20:20:09 -- common/autotest_common.sh@850 -- # return 0 00:13:39.439 20:20:09 -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:13:39.439 20:20:09 -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:13:39.439 20:20:09 -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:39.439 20:20:09 -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:39.439 20:20:09 -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:40.006 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:40.264 Waiting for block devices as requested 00:13:40.522 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:40.522 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:40.522 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:40.782 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:46.119 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:46.119 20:20:15 -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:46.119 20:20:15 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:46.119 20:20:15 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:46.119 20:20:15 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:46.119 20:20:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.119 20:20:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:46.119 20:20:15 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:46.119 20:20:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:46.119 20:20:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.119 20:20:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.119 20:20:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:46.119 20:20:15 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:46.119 20:20:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:46.119 20:20:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.119 20:20:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.119 20:20:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:46.119 20:20:15 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:46.119 20:20:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:46.119 20:20:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.119 20:20:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.119 20:20:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:46.119 20:20:15 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:46.119 20:20:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:46.119 20:20:15 -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.119 20:20:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.119 20:20:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:46.119 20:20:15 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:46.119 20:20:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:46.119 20:20:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.119 20:20:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.119 20:20:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:46.119 20:20:15 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:46.119 20:20:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:46.119 20:20:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.119 20:20:15 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:46.119 20:20:15 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:46.119 20:20:15 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:46.119 20:20:15 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:46.119 20:20:15 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:46.119 20:20:15 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:46.119 20:20:15 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:46.119 20:20:15 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:46.119 20:20:15 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:46.119 20:20:15 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:46.119 20:20:15 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:46.119 20:20:15 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:46.119 20:20:15 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:46.119 20:20:15 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:46.119 20:20:15 -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:46.119 20:20:15 -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:46.119 20:20:15 -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:46.119 20:20:15 -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:46.119 20:20:15 -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:46.119 20:20:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.119 20:20:15 -- common/autotest_common.sh@10 -- # set +x 
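[editor's note] With zoned namespaces filtered out, the loop above queues one bdev_xnvme_create command per block device and feeds the whole batch to rpc_cmd in a single pipe (the printf on the next line). Issued one at a time against a running spdk_tgt, the equivalent calls would be:

scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring
scripts/rpc.py bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring
# ...and likewise for nvme2n1, nvme2n2, nvme2n3 and nvme3n1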
00:13:46.119 20:20:15 -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:46.119 nvme0n1 00:13:46.119 nvme1n1 00:13:46.119 nvme2n1 00:13:46.119 nvme2n2 00:13:46.119 nvme2n3 00:13:46.119 nvme3n1 00:13:46.119 20:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.119 20:20:16 -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:13:46.119 20:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.119 20:20:16 -- common/autotest_common.sh@10 -- # set +x 00:13:46.119 20:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.119 20:20:16 -- bdev/blockdev.sh@740 -- # cat 00:13:46.119 20:20:16 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:13:46.119 20:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.119 20:20:16 -- common/autotest_common.sh@10 -- # set +x 00:13:46.119 20:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.119 20:20:16 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:13:46.119 20:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.119 20:20:16 -- common/autotest_common.sh@10 -- # set +x 00:13:46.119 20:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.119 20:20:16 -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:46.119 20:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.119 20:20:16 -- common/autotest_common.sh@10 -- # set +x 00:13:46.119 20:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.119 20:20:16 -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:13:46.119 20:20:16 -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:13:46.119 20:20:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.119 20:20:16 -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:13:46.119 20:20:16 -- common/autotest_common.sh@10 -- # set +x 00:13:46.119 20:20:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.119 20:20:16 -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:13:46.119 20:20:16 -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "8d04516f-d232-4850-abe0-429b42069f0b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8d04516f-d232-4850-abe0-429b42069f0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "03ede18d-8ec9-42dd-bdd8-9207ddc94bf7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "03ede18d-8ec9-42dd-bdd8-9207ddc94bf7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' 
"write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "dc455350-0ac9-47fc-9499-784b6f5ee9f3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc455350-0ac9-47fc-9499-784b6f5ee9f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "7d1408e2-09fa-4da3-9e17-72f1b800d998"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7d1408e2-09fa-4da3-9e17-72f1b800d998",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "966cd3dd-8832-4310-b63c-c2dfc1118261"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "966cd3dd-8832-4310-b63c-c2dfc1118261",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0833a153-712e-40d3-8253-8740a39feec9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0833a153-712e-40d3-8253-8740a39feec9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:46.119 20:20:16 -- bdev/blockdev.sh@749 -- # jq -r .name 00:13:46.119 20:20:16 -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:13:46.120 20:20:16 -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:13:46.120 20:20:16 -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:13:46.120 20:20:16 -- bdev/blockdev.sh@754 -- # killprocess 73676 00:13:46.120 20:20:16 -- common/autotest_common.sh@936 -- # '[' -z 73676 ']' 00:13:46.120 20:20:16 -- common/autotest_common.sh@940 -- # kill -0 73676 00:13:46.120 
20:20:16 -- common/autotest_common.sh@941 -- # uname 00:13:46.120 20:20:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:46.120 20:20:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73676 00:13:46.120 killing process with pid 73676 00:13:46.120 20:20:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:46.120 20:20:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:46.120 20:20:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73676' 00:13:46.120 20:20:16 -- common/autotest_common.sh@955 -- # kill 73676 00:13:46.120 20:20:16 -- common/autotest_common.sh@960 -- # wait 73676 00:13:48.699 20:20:18 -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:48.699 20:20:18 -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:48.699 20:20:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:13:48.699 20:20:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:48.699 20:20:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.699 ************************************ 00:13:48.699 START TEST bdev_hello_world 00:13:48.699 ************************************ 00:13:48.699 20:20:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:48.699 [2024-04-24 20:20:18.848173] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:13:48.699 [2024-04-24 20:20:18.848293] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74060 ] 00:13:48.958 [2024-04-24 20:20:19.017666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.216 [2024-04-24 20:20:19.257717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.784 [2024-04-24 20:20:19.743109] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:49.784 [2024-04-24 20:20:19.743165] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:49.784 [2024-04-24 20:20:19.743187] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:49.784 [2024-04-24 20:20:19.745227] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:49.784 [2024-04-24 20:20:19.745702] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:49.784 [2024-04-24 20:20:19.745735] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:49.784 [2024-04-24 20:20:19.745979] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
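The write/read round-trip logged above is produced by the hello_bdev example binary; the invocation, as launched by run_test at the top of this test (trailing empty extra-arg dropped):

    # hello_bdev loads the bdev config, opens nvme0n1, writes a buffer,
    # reads it back, and prints the NOTICE lines seen above before exiting.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b nvme0n1
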
00:13:49.784 00:13:49.784 [2024-04-24 20:20:19.746001] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:51.161 00:13:51.161 real 0m2.222s 00:13:51.161 user 0m1.872s 00:13:51.161 sys 0m0.235s 00:13:51.161 20:20:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:51.161 20:20:20 -- common/autotest_common.sh@10 -- # set +x 00:13:51.161 ************************************ 00:13:51.161 END TEST bdev_hello_world 00:13:51.161 ************************************ 00:13:51.161 20:20:21 -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:13:51.161 20:20:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:51.161 20:20:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:51.161 20:20:21 -- common/autotest_common.sh@10 -- # set +x 00:13:51.161 ************************************ 00:13:51.161 START TEST bdev_bounds 00:13:51.161 ************************************ 00:13:51.161 20:20:21 -- common/autotest_common.sh@1111 -- # bdev_bounds '' 00:13:51.161 20:20:21 -- bdev/blockdev.sh@290 -- # bdevio_pid=74106 00:13:51.161 20:20:21 -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:51.161 20:20:21 -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:51.161 Process bdevio pid: 74106 00:13:51.161 20:20:21 -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 74106' 00:13:51.161 20:20:21 -- bdev/blockdev.sh@293 -- # waitforlisten 74106 00:13:51.161 20:20:21 -- common/autotest_common.sh@817 -- # '[' -z 74106 ']' 00:13:51.161 20:20:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.161 20:20:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:51.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.161 20:20:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.161 20:20:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:51.161 20:20:21 -- common/autotest_common.sh@10 -- # set +x 00:13:51.161 [2024-04-24 20:20:21.227598] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
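bdev_bounds drives its CUnit suites through a bdevio target started in wait mode. An outline of the flow being set up here, with the pid bookkeeping reduced to its essentials (waitforlisten and killprocess are the autotest_common.sh helpers traced in this log; flags as traced above):

    # Start bdevio in wait mode (-w) on the default RPC socket, then trigger
    # the CUnit suites over RPC and tear the target back down.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
    bdevio_pid=$!
    waitforlisten "$bdevio_pid"    # block until /var/tmp/spdk.sock accepts connections
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
    killprocess "$bdevio_pid"
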
00:13:51.161 [2024-04-24 20:20:21.227709] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74106 ] 00:13:51.420 [2024-04-24 20:20:21.397897] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:51.420 [2024-04-24 20:20:21.631098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.420 [2024-04-24 20:20:21.631173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.420 [2024-04-24 20:20:21.631198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.990 20:20:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:51.990 20:20:22 -- common/autotest_common.sh@850 -- # return 0 00:13:51.990 20:20:22 -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:52.249 I/O targets: 00:13:52.249 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:52.249 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:52.249 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:52.249 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:52.249 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:52.249 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:52.249 00:13:52.249 00:13:52.249 CUnit - A unit testing framework for C - Version 2.1-3 00:13:52.249 http://cunit.sourceforge.net/ 00:13:52.249 00:13:52.249 00:13:52.249 Suite: bdevio tests on: nvme3n1 00:13:52.249 Test: blockdev write read block ...passed 00:13:52.249 Test: blockdev write zeroes read block ...passed 00:13:52.249 Test: blockdev write zeroes read no split ...passed 00:13:52.249 Test: blockdev write zeroes read split ...passed 00:13:52.249 Test: blockdev write zeroes read split partial ...passed 00:13:52.249 Test: blockdev reset ...passed 00:13:52.249 Test: blockdev write read 8 blocks ...passed 00:13:52.249 Test: blockdev write read size > 128k ...passed 00:13:52.249 Test: blockdev write read invalid size ...passed 00:13:52.249 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:52.249 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:52.249 Test: blockdev write read max offset ...passed 00:13:52.249 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:52.249 Test: blockdev writev readv 8 blocks ...passed 00:13:52.249 Test: blockdev writev readv 30 x 1block ...passed 00:13:52.249 Test: blockdev writev readv block ...passed 00:13:52.249 Test: blockdev writev readv size > 128k ...passed 00:13:52.249 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:52.249 Test: blockdev comparev and writev ...passed 00:13:52.249 Test: blockdev nvme passthru rw ...passed 00:13:52.249 Test: blockdev nvme passthru vendor specific ...passed 00:13:52.249 Test: blockdev nvme admin passthru ...passed 00:13:52.249 Test: blockdev copy ...passed 00:13:52.249 Suite: bdevio tests on: nvme2n3 00:13:52.249 Test: blockdev write read block ...passed 00:13:52.249 Test: blockdev write zeroes read block ...passed 00:13:52.249 Test: blockdev write zeroes read no split ...passed 00:13:52.249 Test: blockdev write zeroes read split ...passed 00:13:52.249 Test: blockdev write zeroes read split partial ...passed 00:13:52.249 Test: blockdev reset ...passed 00:13:52.249 Test: blockdev write read 8 blocks ...passed 00:13:52.249 Test: blockdev write read size > 128k 
...passed 00:13:52.249 Test: blockdev write read invalid size ...passed 00:13:52.249 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:52.249 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:52.249 Test: blockdev write read max offset ...passed 00:13:52.249 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:52.249 Test: blockdev writev readv 8 blocks ...passed 00:13:52.249 Test: blockdev writev readv 30 x 1block ...passed 00:13:52.249 Test: blockdev writev readv block ...passed 00:13:52.249 Test: blockdev writev readv size > 128k ...passed 00:13:52.249 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:52.249 Test: blockdev comparev and writev ...passed 00:13:52.249 Test: blockdev nvme passthru rw ...passed 00:13:52.249 Test: blockdev nvme passthru vendor specific ...passed 00:13:52.249 Test: blockdev nvme admin passthru ...passed 00:13:52.249 Test: blockdev copy ...passed 00:13:52.249 Suite: bdevio tests on: nvme2n2 00:13:52.249 Test: blockdev write read block ...passed 00:13:52.249 Test: blockdev write zeroes read block ...passed 00:13:52.249 Test: blockdev write zeroes read no split ...passed 00:13:52.249 Test: blockdev write zeroes read split ...passed 00:13:52.249 Test: blockdev write zeroes read split partial ...passed 00:13:52.249 Test: blockdev reset ...passed 00:13:52.249 Test: blockdev write read 8 blocks ...passed 00:13:52.249 Test: blockdev write read size > 128k ...passed 00:13:52.249 Test: blockdev write read invalid size ...passed 00:13:52.249 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:52.249 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:52.249 Test: blockdev write read max offset ...passed 00:13:52.249 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:52.249 Test: blockdev writev readv 8 blocks ...passed 00:13:52.509 Test: blockdev writev readv 30 x 1block ...passed 00:13:52.509 Test: blockdev writev readv block ...passed 00:13:52.509 Test: blockdev writev readv size > 128k ...passed 00:13:52.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:52.509 Test: blockdev comparev and writev ...passed 00:13:52.509 Test: blockdev nvme passthru rw ...passed 00:13:52.509 Test: blockdev nvme passthru vendor specific ...passed 00:13:52.509 Test: blockdev nvme admin passthru ...passed 00:13:52.509 Test: blockdev copy ...passed 00:13:52.509 Suite: bdevio tests on: nvme2n1 00:13:52.509 Test: blockdev write read block ...passed 00:13:52.509 Test: blockdev write zeroes read block ...passed 00:13:52.509 Test: blockdev write zeroes read no split ...passed 00:13:52.509 Test: blockdev write zeroes read split ...passed 00:13:52.509 Test: blockdev write zeroes read split partial ...passed 00:13:52.509 Test: blockdev reset ...passed 00:13:52.509 Test: blockdev write read 8 blocks ...passed 00:13:52.509 Test: blockdev write read size > 128k ...passed 00:13:52.509 Test: blockdev write read invalid size ...passed 00:13:52.509 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:52.509 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:52.509 Test: blockdev write read max offset ...passed 00:13:52.509 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:52.509 Test: blockdev writev readv 8 blocks ...passed 00:13:52.509 Test: blockdev writev readv 30 x 1block ...passed 00:13:52.509 Test: blockdev writev readv 
block ...passed 00:13:52.509 Test: blockdev writev readv size > 128k ...passed 00:13:52.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:52.509 Test: blockdev comparev and writev ...passed 00:13:52.509 Test: blockdev nvme passthru rw ...passed 00:13:52.509 Test: blockdev nvme passthru vendor specific ...passed 00:13:52.509 Test: blockdev nvme admin passthru ...passed 00:13:52.509 Test: blockdev copy ...passed 00:13:52.509 Suite: bdevio tests on: nvme1n1 00:13:52.509 Test: blockdev write read block ...passed 00:13:52.509 Test: blockdev write zeroes read block ...passed 00:13:52.509 Test: blockdev write zeroes read no split ...passed 00:13:52.509 Test: blockdev write zeroes read split ...passed 00:13:52.509 Test: blockdev write zeroes read split partial ...passed 00:13:52.509 Test: blockdev reset ...passed 00:13:52.509 Test: blockdev write read 8 blocks ...passed 00:13:52.509 Test: blockdev write read size > 128k ...passed 00:13:52.509 Test: blockdev write read invalid size ...passed 00:13:52.509 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:52.509 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:52.509 Test: blockdev write read max offset ...passed 00:13:52.509 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:52.509 Test: blockdev writev readv 8 blocks ...passed 00:13:52.509 Test: blockdev writev readv 30 x 1block ...passed 00:13:52.509 Test: blockdev writev readv block ...passed 00:13:52.509 Test: blockdev writev readv size > 128k ...passed 00:13:52.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:52.509 Test: blockdev comparev and writev ...passed 00:13:52.509 Test: blockdev nvme passthru rw ...passed 00:13:52.509 Test: blockdev nvme passthru vendor specific ...passed 00:13:52.509 Test: blockdev nvme admin passthru ...passed 00:13:52.509 Test: blockdev copy ...passed 00:13:52.509 Suite: bdevio tests on: nvme0n1 00:13:52.509 Test: blockdev write read block ...passed 00:13:52.509 Test: blockdev write zeroes read block ...passed 00:13:52.509 Test: blockdev write zeroes read no split ...passed 00:13:52.509 Test: blockdev write zeroes read split ...passed 00:13:52.509 Test: blockdev write zeroes read split partial ...passed 00:13:52.509 Test: blockdev reset ...passed 00:13:52.509 Test: blockdev write read 8 blocks ...passed 00:13:52.509 Test: blockdev write read size > 128k ...passed 00:13:52.509 Test: blockdev write read invalid size ...passed 00:13:52.509 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:52.509 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:52.509 Test: blockdev write read max offset ...passed 00:13:52.509 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:52.509 Test: blockdev writev readv 8 blocks ...passed 00:13:52.509 Test: blockdev writev readv 30 x 1block ...passed 00:13:52.509 Test: blockdev writev readv block ...passed 00:13:52.509 Test: blockdev writev readv size > 128k ...passed 00:13:52.509 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:52.509 Test: blockdev comparev and writev ...passed 00:13:52.509 Test: blockdev nvme passthru rw ...passed 00:13:52.509 Test: blockdev nvme passthru vendor specific ...passed 00:13:52.509 Test: blockdev nvme admin passthru ...passed 00:13:52.509 Test: blockdev copy ...passed 00:13:52.509 00:13:52.509 Run Summary: Type Total Ran Passed Failed Inactive 00:13:52.509 suites 6 6 n/a 0 0 
00:13:52.509 tests 138 138 138 0 0 00:13:52.509 asserts 780 780 780 0 n/a 00:13:52.509 00:13:52.509 Elapsed time = 1.324 seconds 00:13:52.509 0 00:13:52.769 20:20:22 -- bdev/blockdev.sh@295 -- # killprocess 74106 00:13:52.769 20:20:22 -- common/autotest_common.sh@936 -- # '[' -z 74106 ']' 00:13:52.769 20:20:22 -- common/autotest_common.sh@940 -- # kill -0 74106 00:13:52.769 20:20:22 -- common/autotest_common.sh@941 -- # uname 00:13:52.769 20:20:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:52.769 20:20:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74106 00:13:52.769 20:20:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:52.769 20:20:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:52.769 killing process with pid 74106 00:13:52.769 20:20:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74106' 00:13:52.769 20:20:22 -- common/autotest_common.sh@955 -- # kill 74106 00:13:52.769 20:20:22 -- common/autotest_common.sh@960 -- # wait 74106 00:13:54.148 20:20:24 -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:13:54.148 00:13:54.148 real 0m2.934s 00:13:54.148 user 0m6.735s 00:13:54.148 sys 0m0.423s 00:13:54.148 20:20:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:54.148 20:20:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.148 ************************************ 00:13:54.148 END TEST bdev_bounds 00:13:54.148 ************************************ 00:13:54.148 20:20:24 -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:54.148 20:20:24 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:13:54.148 20:20:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:54.148 20:20:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.148 ************************************ 00:13:54.148 START TEST bdev_nbd 00:13:54.148 ************************************ 00:13:54.148 20:20:24 -- common/autotest_common.sh@1111 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:54.148 20:20:24 -- bdev/blockdev.sh@300 -- # uname -s 00:13:54.148 20:20:24 -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:13:54.148 20:20:24 -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:54.148 20:20:24 -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:54.148 20:20:24 -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:54.148 20:20:24 -- bdev/blockdev.sh@304 -- # local bdev_all 00:13:54.148 20:20:24 -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:13:54.148 20:20:24 -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:13:54.148 20:20:24 -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:54.148 20:20:24 -- bdev/blockdev.sh@311 -- # local nbd_all 00:13:54.148 20:20:24 -- bdev/blockdev.sh@312 -- # bdev_num=6 00:13:54.148 20:20:24 -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:54.148 20:20:24 -- bdev/blockdev.sh@314 -- # local nbd_list 00:13:54.148 20:20:24 -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 
'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:54.148 20:20:24 -- bdev/blockdev.sh@315 -- # local bdev_list 00:13:54.148 20:20:24 -- bdev/blockdev.sh@318 -- # nbd_pid=74184 00:13:54.148 20:20:24 -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:54.148 20:20:24 -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:54.148 20:20:24 -- bdev/blockdev.sh@320 -- # waitforlisten 74184 /var/tmp/spdk-nbd.sock 00:13:54.148 20:20:24 -- common/autotest_common.sh@817 -- # '[' -z 74184 ']' 00:13:54.148 20:20:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:54.148 20:20:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:54.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:54.149 20:20:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:54.149 20:20:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:54.149 20:20:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.149 [2024-04-24 20:20:24.324286] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:13:54.149 [2024-04-24 20:20:24.324393] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:54.408 [2024-04-24 20:20:24.491636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.667 [2024-04-24 20:20:24.730136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.236 20:20:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:55.236 20:20:25 -- common/autotest_common.sh@850 -- # return 0 00:13:55.236 20:20:25 -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@24 -- # local i 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:55.236 20:20:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:55.495 20:20:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:55.495 20:20:25 -- common/autotest_common.sh@854 -- # local 
nbd_name=nbd0 00:13:55.495 20:20:25 -- common/autotest_common.sh@855 -- # local i 00:13:55.495 20:20:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:55.495 20:20:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:55.495 20:20:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:13:55.495 20:20:25 -- common/autotest_common.sh@859 -- # break 00:13:55.495 20:20:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:55.495 20:20:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:55.495 20:20:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.495 1+0 records in 00:13:55.495 1+0 records out 00:13:55.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690756 s, 5.9 MB/s 00:13:55.495 20:20:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.495 20:20:25 -- common/autotest_common.sh@872 -- # size=4096 00:13:55.495 20:20:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.495 20:20:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:55.495 20:20:25 -- common/autotest_common.sh@875 -- # return 0 00:13:55.495 20:20:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:55.495 20:20:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:55.495 20:20:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:55.821 20:20:25 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:55.821 20:20:25 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:55.821 20:20:25 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:55.821 20:20:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:13:55.821 20:20:25 -- common/autotest_common.sh@855 -- # local i 00:13:55.821 20:20:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:55.821 20:20:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:55.821 20:20:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:13:55.821 20:20:25 -- common/autotest_common.sh@859 -- # break 00:13:55.821 20:20:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:55.821 20:20:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:55.821 20:20:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.821 1+0 records in 00:13:55.821 1+0 records out 00:13:55.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000719644 s, 5.7 MB/s 00:13:55.821 20:20:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.821 20:20:25 -- common/autotest_common.sh@872 -- # size=4096 00:13:55.821 20:20:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.821 20:20:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:55.821 20:20:25 -- common/autotest_common.sh@875 -- # return 0 00:13:55.821 20:20:25 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:55.821 20:20:25 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:55.821 20:20:25 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:55.821 20:20:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:55.821 20:20:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:55.821 20:20:26 -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:13:55.821 20:20:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:13:55.821 20:20:26 -- common/autotest_common.sh@855 -- # local i 00:13:55.821 20:20:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:55.821 20:20:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:55.821 20:20:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:13:55.821 20:20:26 -- common/autotest_common.sh@859 -- # break 00:13:55.821 20:20:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:55.821 20:20:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:55.821 20:20:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.821 1+0 records in 00:13:55.821 1+0 records out 00:13:55.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573132 s, 7.1 MB/s 00:13:55.821 20:20:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.821 20:20:26 -- common/autotest_common.sh@872 -- # size=4096 00:13:55.821 20:20:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.080 20:20:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:56.080 20:20:26 -- common/autotest_common.sh@875 -- # return 0 00:13:56.081 20:20:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:56.081 20:20:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:56.081 20:20:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:56.081 20:20:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:56.081 20:20:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:56.081 20:20:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:56.081 20:20:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:13:56.081 20:20:26 -- common/autotest_common.sh@855 -- # local i 00:13:56.081 20:20:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:56.081 20:20:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:56.081 20:20:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:13:56.081 20:20:26 -- common/autotest_common.sh@859 -- # break 00:13:56.081 20:20:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:56.081 20:20:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:56.081 20:20:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.081 1+0 records in 00:13:56.081 1+0 records out 00:13:56.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508666 s, 8.1 MB/s 00:13:56.081 20:20:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.081 20:20:26 -- common/autotest_common.sh@872 -- # size=4096 00:13:56.081 20:20:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.081 20:20:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:56.081 20:20:26 -- common/autotest_common.sh@875 -- # return 0 00:13:56.081 20:20:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:56.081 20:20:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:56.081 20:20:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:56.339 20:20:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:56.339 20:20:26 -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:56.339 20:20:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:56.339 20:20:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:13:56.339 20:20:26 -- common/autotest_common.sh@855 -- # local i 00:13:56.339 20:20:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:56.339 20:20:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:56.339 20:20:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:13:56.339 20:20:26 -- common/autotest_common.sh@859 -- # break 00:13:56.339 20:20:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:56.339 20:20:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:56.339 20:20:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.339 1+0 records in 00:13:56.339 1+0 records out 00:13:56.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505008 s, 8.1 MB/s 00:13:56.339 20:20:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.339 20:20:26 -- common/autotest_common.sh@872 -- # size=4096 00:13:56.339 20:20:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.339 20:20:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:56.339 20:20:26 -- common/autotest_common.sh@875 -- # return 0 00:13:56.339 20:20:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:56.339 20:20:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:56.339 20:20:26 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:56.598 20:20:26 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:56.598 20:20:26 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:56.598 20:20:26 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:56.598 20:20:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:13:56.598 20:20:26 -- common/autotest_common.sh@855 -- # local i 00:13:56.598 20:20:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:56.598 20:20:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:56.598 20:20:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:13:56.598 20:20:26 -- common/autotest_common.sh@859 -- # break 00:13:56.598 20:20:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:56.598 20:20:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:56.598 20:20:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:56.598 1+0 records in 00:13:56.598 1+0 records out 00:13:56.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000797354 s, 5.1 MB/s 00:13:56.598 20:20:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.598 20:20:26 -- common/autotest_common.sh@872 -- # size=4096 00:13:56.598 20:20:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:56.598 20:20:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:56.598 20:20:26 -- common/autotest_common.sh@875 -- # return 0 00:13:56.598 20:20:26 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:56.598 20:20:26 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:56.598 20:20:26 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:56.856 20:20:26 -- 
bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:56.856 { 00:13:56.856 "nbd_device": "/dev/nbd0", 00:13:56.856 "bdev_name": "nvme0n1" 00:13:56.856 }, 00:13:56.856 { 00:13:56.856 "nbd_device": "/dev/nbd1", 00:13:56.856 "bdev_name": "nvme1n1" 00:13:56.856 }, 00:13:56.856 { 00:13:56.856 "nbd_device": "/dev/nbd2", 00:13:56.856 "bdev_name": "nvme2n1" 00:13:56.856 }, 00:13:56.856 { 00:13:56.856 "nbd_device": "/dev/nbd3", 00:13:56.856 "bdev_name": "nvme2n2" 00:13:56.856 }, 00:13:56.856 { 00:13:56.856 "nbd_device": "/dev/nbd4", 00:13:56.856 "bdev_name": "nvme2n3" 00:13:56.856 }, 00:13:56.856 { 00:13:56.856 "nbd_device": "/dev/nbd5", 00:13:56.856 "bdev_name": "nvme3n1" 00:13:56.856 } 00:13:56.856 ]' 00:13:56.856 20:20:26 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:56.856 20:20:26 -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:56.856 { 00:13:56.856 "nbd_device": "/dev/nbd0", 00:13:56.856 "bdev_name": "nvme0n1" 00:13:56.856 }, 00:13:56.856 { 00:13:56.856 "nbd_device": "/dev/nbd1", 00:13:56.856 "bdev_name": "nvme1n1" 00:13:56.856 }, 00:13:56.856 { 00:13:56.856 "nbd_device": "/dev/nbd2", 00:13:56.856 "bdev_name": "nvme2n1" 00:13:56.856 }, 00:13:56.856 { 00:13:56.856 "nbd_device": "/dev/nbd3", 00:13:56.856 "bdev_name": "nvme2n2" 00:13:56.856 }, 00:13:56.856 { 00:13:56.856 "nbd_device": "/dev/nbd4", 00:13:56.856 "bdev_name": "nvme2n3" 00:13:56.856 }, 00:13:56.856 { 00:13:56.856 "nbd_device": "/dev/nbd5", 00:13:56.856 "bdev_name": "nvme3n1" 00:13:56.856 } 00:13:56.856 ]' 00:13:56.856 20:20:26 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:56.856 20:20:26 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:56.856 20:20:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:56.856 20:20:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:56.856 20:20:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:56.856 20:20:26 -- bdev/nbd_common.sh@51 -- # local i 00:13:56.856 20:20:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.856 20:20:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:57.114 20:20:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:57.114 20:20:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:57.114 20:20:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:57.114 20:20:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.114 20:20:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.114 20:20:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:57.114 20:20:27 -- bdev/nbd_common.sh@41 -- # break 00:13:57.114 20:20:27 -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.114 20:20:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.114 20:20:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@41 -- # break 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@41 -- # break 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.372 20:20:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:57.631 20:20:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:57.631 20:20:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:57.631 20:20:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:57.631 20:20:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.631 20:20:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.631 20:20:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:57.631 20:20:27 -- bdev/nbd_common.sh@41 -- # break 00:13:57.631 20:20:27 -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.631 20:20:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.631 20:20:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:57.889 20:20:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:57.889 20:20:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:57.889 20:20:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:57.889 20:20:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.889 20:20:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.889 20:20:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:57.889 20:20:27 -- bdev/nbd_common.sh@41 -- # break 00:13:57.889 20:20:27 -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.889 20:20:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.889 20:20:27 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@41 -- # break 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
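Every nbd_start_disk/nbd_stop_disk above is bracketed by the same polling helpers from nbd_common.sh: waitfornbd spins on /proc/partitions until the kernel exposes the device and then proves it with one direct-I/O read, while waitfornbd_exit spins until the entry disappears. A sketch reconstructed from the traced lines, with the sleep interval assumed and the scratch-file path simplified:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # interval assumed; the trace only shows the retry bound
        done
        # prove the device answers I/O: read back one 4 KiB block with O_DIRECT
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [[ $size != 0 ]]    # a non-empty read back means the nbd device is live
    }

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
    }
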
00:13:58.148 20:20:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:58.148 20:20:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@65 -- # true 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@65 -- # count=0 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@122 -- # count=0 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@127 -- # return 0 00:13:58.406 20:20:28 -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@12 -- # local i 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:58.406 /dev/nbd0 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:58.406 20:20:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:58.406 20:20:28 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:13:58.406 20:20:28 -- common/autotest_common.sh@855 -- # local i 00:13:58.406 20:20:28 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:58.406 20:20:28 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:58.406 20:20:28 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:13:58.665 20:20:28 -- common/autotest_common.sh@859 -- # break 00:13:58.665 20:20:28 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:58.665 20:20:28 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:58.665 20:20:28 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:58.665 1+0 records in 00:13:58.665 1+0 records out 00:13:58.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061769 s, 
6.6 MB/s 00:13:58.665 20:20:28 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.665 20:20:28 -- common/autotest_common.sh@872 -- # size=4096 00:13:58.665 20:20:28 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.665 20:20:28 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:58.665 20:20:28 -- common/autotest_common.sh@875 -- # return 0 00:13:58.665 20:20:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:58.665 20:20:28 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:58.665 20:20:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:58.665 /dev/nbd1 00:13:58.665 20:20:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:58.665 20:20:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:58.665 20:20:28 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:13:58.665 20:20:28 -- common/autotest_common.sh@855 -- # local i 00:13:58.665 20:20:28 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:58.665 20:20:28 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:58.665 20:20:28 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:13:58.665 20:20:28 -- common/autotest_common.sh@859 -- # break 00:13:58.665 20:20:28 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:58.665 20:20:28 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:58.665 20:20:28 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:58.665 1+0 records in 00:13:58.665 1+0 records out 00:13:58.665 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000493762 s, 8.3 MB/s 00:13:58.665 20:20:28 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.665 20:20:28 -- common/autotest_common.sh@872 -- # size=4096 00:13:58.665 20:20:28 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.665 20:20:28 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:58.665 20:20:28 -- common/autotest_common.sh@875 -- # return 0 00:13:58.665 20:20:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:58.665 20:20:28 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:58.665 20:20:28 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:58.924 /dev/nbd10 00:13:58.924 20:20:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:58.924 20:20:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:58.924 20:20:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:13:58.924 20:20:29 -- common/autotest_common.sh@855 -- # local i 00:13:58.924 20:20:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:58.924 20:20:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:58.924 20:20:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:13:58.924 20:20:29 -- common/autotest_common.sh@859 -- # break 00:13:58.924 20:20:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:58.924 20:20:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:58.924 20:20:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:58.924 1+0 records in 00:13:58.924 1+0 records out 00:13:58.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000848942 s, 4.8 MB/s 00:13:58.924 20:20:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.924 20:20:29 -- common/autotest_common.sh@872 -- # size=4096 00:13:58.924 20:20:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:58.924 20:20:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:58.924 20:20:29 -- common/autotest_common.sh@875 -- # return 0 00:13:58.924 20:20:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:58.924 20:20:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:58.924 20:20:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:59.182 /dev/nbd11 00:13:59.182 20:20:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:59.182 20:20:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:59.182 20:20:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:13:59.182 20:20:29 -- common/autotest_common.sh@855 -- # local i 00:13:59.182 20:20:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:59.182 20:20:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:59.182 20:20:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:13:59.182 20:20:29 -- common/autotest_common.sh@859 -- # break 00:13:59.183 20:20:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:59.183 20:20:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:59.183 20:20:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.183 1+0 records in 00:13:59.183 1+0 records out 00:13:59.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811071 s, 5.1 MB/s 00:13:59.183 20:20:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.183 20:20:29 -- common/autotest_common.sh@872 -- # size=4096 00:13:59.183 20:20:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.183 20:20:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:59.183 20:20:29 -- common/autotest_common.sh@875 -- # return 0 00:13:59.183 20:20:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.183 20:20:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:59.183 20:20:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:59.441 /dev/nbd12 00:13:59.441 20:20:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:59.441 20:20:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:59.441 20:20:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:13:59.441 20:20:29 -- common/autotest_common.sh@855 -- # local i 00:13:59.441 20:20:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:59.441 20:20:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:59.441 20:20:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:13:59.441 20:20:29 -- common/autotest_common.sh@859 -- # break 00:13:59.441 20:20:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:59.441 20:20:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:59.441 20:20:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.441 1+0 records in 00:13:59.441 1+0 records out 00:13:59.441 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000694432 s, 5.9 MB/s 00:13:59.441 20:20:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.441 20:20:29 -- common/autotest_common.sh@872 -- # size=4096 00:13:59.441 20:20:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.441 20:20:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:59.441 20:20:29 -- common/autotest_common.sh@875 -- # return 0 00:13:59.441 20:20:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.441 20:20:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:59.441 20:20:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:59.700 /dev/nbd13 00:13:59.700 20:20:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:59.700 20:20:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:59.700 20:20:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:13:59.700 20:20:29 -- common/autotest_common.sh@855 -- # local i 00:13:59.700 20:20:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:59.700 20:20:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:59.700 20:20:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:13:59.700 20:20:29 -- common/autotest_common.sh@859 -- # break 00:13:59.700 20:20:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:59.700 20:20:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:59.700 20:20:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.700 1+0 records in 00:13:59.700 1+0 records out 00:13:59.700 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000716466 s, 5.7 MB/s 00:13:59.700 20:20:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.700 20:20:29 -- common/autotest_common.sh@872 -- # size=4096 00:13:59.700 20:20:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.700 20:20:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:59.700 20:20:29 -- common/autotest_common.sh@875 -- # return 0 00:13:59.700 20:20:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.700 20:20:29 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:59.700 20:20:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:59.700 20:20:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:59.700 20:20:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:59.959 20:20:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:59.959 { 00:13:59.959 "nbd_device": "/dev/nbd0", 00:13:59.959 "bdev_name": "nvme0n1" 00:13:59.959 }, 00:13:59.959 { 00:13:59.959 "nbd_device": "/dev/nbd1", 00:13:59.959 "bdev_name": "nvme1n1" 00:13:59.959 }, 00:13:59.959 { 00:13:59.959 "nbd_device": "/dev/nbd10", 00:13:59.959 "bdev_name": "nvme2n1" 00:13:59.959 }, 00:13:59.959 { 00:13:59.959 "nbd_device": "/dev/nbd11", 00:13:59.959 "bdev_name": "nvme2n2" 00:13:59.959 }, 00:13:59.959 { 00:13:59.959 "nbd_device": "/dev/nbd12", 00:13:59.959 "bdev_name": "nvme2n3" 00:13:59.959 }, 00:13:59.959 { 00:13:59.959 "nbd_device": "/dev/nbd13", 00:13:59.959 "bdev_name": "nvme3n1" 00:13:59.959 } 00:13:59.959 ]' 00:13:59.959 20:20:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:59.959 20:20:29 -- 
bdev/nbd_common.sh@64 -- # echo '[ 00:13:59.959 { 00:13:59.959 "nbd_device": "/dev/nbd0", 00:13:59.959 "bdev_name": "nvme0n1" 00:13:59.959 }, 00:13:59.959 { 00:13:59.959 "nbd_device": "/dev/nbd1", 00:13:59.959 "bdev_name": "nvme1n1" 00:13:59.959 }, 00:13:59.959 { 00:13:59.959 "nbd_device": "/dev/nbd10", 00:13:59.959 "bdev_name": "nvme2n1" 00:13:59.959 }, 00:13:59.959 { 00:13:59.959 "nbd_device": "/dev/nbd11", 00:13:59.959 "bdev_name": "nvme2n2" 00:13:59.959 }, 00:13:59.959 { 00:13:59.959 "nbd_device": "/dev/nbd12", 00:13:59.959 "bdev_name": "nvme2n3" 00:13:59.959 }, 00:13:59.959 { 00:13:59.959 "nbd_device": "/dev/nbd13", 00:13:59.959 "bdev_name": "nvme3n1" 00:13:59.959 } 00:13:59.959 ]' 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:59.959 /dev/nbd1 00:13:59.959 /dev/nbd10 00:13:59.959 /dev/nbd11 00:13:59.959 /dev/nbd12 00:13:59.959 /dev/nbd13' 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:59.959 /dev/nbd1 00:13:59.959 /dev/nbd10 00:13:59.959 /dev/nbd11 00:13:59.959 /dev/nbd12 00:13:59.959 /dev/nbd13' 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@65 -- # count=6 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@66 -- # echo 6 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@95 -- # count=6 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:59.959 256+0 records in 00:13:59.959 256+0 records out 00:13:59.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116569 s, 90.0 MB/s 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:59.959 256+0 records in 00:13:59.959 256+0 records out 00:13:59.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118321 s, 8.9 MB/s 00:13:59.959 20:20:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:59.960 20:20:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:00.217 256+0 records in 00:14:00.217 256+0 records out 00:14:00.217 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148705 s, 7.1 MB/s 00:14:00.217 20:20:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:00.217 20:20:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:00.476 256+0 records in 00:14:00.476 256+0 records out 00:14:00.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120782 s, 8.7 MB/s 00:14:00.476 20:20:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:00.476 20:20:30 -- bdev/nbd_common.sh@78 -- # dd 
if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:00.476 256+0 records in 00:14:00.476 256+0 records out 00:14:00.476 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121888 s, 8.6 MB/s 00:14:00.476 20:20:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:00.476 20:20:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:00.734 256+0 records in 00:14:00.734 256+0 records out 00:14:00.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122571 s, 8.6 MB/s 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:00.735 256+0 records in 00:14:00.735 256+0 records out 00:14:00.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11962 s, 8.8 MB/s 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@51 -- # local i 00:14:00.735 20:20:30 -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.735 20:20:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:00.993 20:20:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:00.993 20:20:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:00.993 20:20:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:00.993 20:20:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.993 20:20:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.993 20:20:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:00.993 20:20:31 -- bdev/nbd_common.sh@41 -- # break 00:14:00.993 20:20:31 -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.993 20:20:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.993 20:20:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:01.252 20:20:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:01.252 20:20:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:01.252 20:20:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:01.252 20:20:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.252 20:20:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.252 20:20:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:01.252 20:20:31 -- bdev/nbd_common.sh@41 -- # break 00:14:01.252 20:20:31 -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.252 20:20:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.252 20:20:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:01.510 20:20:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:01.510 20:20:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:01.510 20:20:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:01.510 20:20:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.510 20:20:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.510 20:20:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:01.510 20:20:31 -- bdev/nbd_common.sh@41 -- # break 00:14:01.510 20:20:31 -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.510 20:20:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.510 20:20:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:01.510 20:20:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@41 -- # break 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:01.768 20:20:31 -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:01.768 20:20:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:01.769 20:20:31 -- bdev/nbd_common.sh@41 -- # break 00:14:01.769 20:20:31 -- bdev/nbd_common.sh@45 -- # return 0 00:14:01.769 20:20:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:01.769 20:20:31 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:02.027 20:20:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:02.027 20:20:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:02.027 20:20:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:02.027 20:20:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:02.027 20:20:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:02.027 20:20:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:02.027 20:20:32 -- bdev/nbd_common.sh@41 -- # break 00:14:02.027 20:20:32 -- bdev/nbd_common.sh@45 -- # return 0 00:14:02.027 20:20:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:02.027 20:20:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:02.027 20:20:32 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@65 -- # echo '' 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@65 -- # true 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@65 -- # count=0 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@66 -- # echo 0 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@104 -- # count=0 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@109 -- # return 0 00:14:02.287 20:20:32 -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:14:02.287 20:20:32 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:02.546 malloc_lvol_verify 00:14:02.546 20:20:32 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:02.804 f62db993-b083-4a58-8eab-1d3919461a63 00:14:02.804 20:20:32 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:02.804 5dc0f674-183d-4257-819b-4cc950bcf1d3 00:14:02.804 20:20:33 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:03.062 /dev/nbd0 00:14:03.062 20:20:33 -- 
bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:14:03.062 mke2fs 1.46.5 (30-Dec-2021) 00:14:03.062 Discarding device blocks: 0/4096 done 00:14:03.062 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:03.062 00:14:03.062 Allocating group tables: 0/1 done 00:14:03.062 Writing inode tables: 0/1 done 00:14:03.062 Creating journal (1024 blocks): done 00:14:03.062 Writing superblocks and filesystem accounting information: 0/1 done 00:14:03.062 00:14:03.062 20:20:33 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:14:03.062 20:20:33 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:03.062 20:20:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:03.062 20:20:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:03.062 20:20:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:03.062 20:20:33 -- bdev/nbd_common.sh@51 -- # local i 00:14:03.062 20:20:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:03.062 20:20:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:03.321 20:20:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:03.321 20:20:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:03.321 20:20:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:03.321 20:20:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:03.321 20:20:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:03.321 20:20:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:03.321 20:20:33 -- bdev/nbd_common.sh@41 -- # break 00:14:03.321 20:20:33 -- bdev/nbd_common.sh@45 -- # return 0 00:14:03.321 20:20:33 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:14:03.321 20:20:33 -- bdev/nbd_common.sh@147 -- # return 0 00:14:03.321 20:20:33 -- bdev/blockdev.sh@326 -- # killprocess 74184 00:14:03.321 20:20:33 -- common/autotest_common.sh@936 -- # '[' -z 74184 ']' 00:14:03.321 20:20:33 -- common/autotest_common.sh@940 -- # kill -0 74184 00:14:03.321 20:20:33 -- common/autotest_common.sh@941 -- # uname 00:14:03.321 20:20:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:03.321 20:20:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74184 00:14:03.321 killing process with pid 74184 00:14:03.321 20:20:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:03.321 20:20:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:03.321 20:20:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74184' 00:14:03.321 20:20:33 -- common/autotest_common.sh@955 -- # kill 74184 00:14:03.321 20:20:33 -- common/autotest_common.sh@960 -- # wait 74184 00:14:04.698 20:20:34 -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:14:04.698 00:14:04.698 real 0m10.615s 00:14:04.698 user 0m13.553s 00:14:04.698 sys 0m4.295s 00:14:04.698 20:20:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:04.698 20:20:34 -- common/autotest_common.sh@10 -- # set +x 00:14:04.698 ************************************ 00:14:04.698 END TEST bdev_nbd 00:14:04.698 ************************************ 00:14:04.698 20:20:34 -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:14:04.698 20:20:34 -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:14:04.698 20:20:34 -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:14:04.698 20:20:34 -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:14:04.698 20:20:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 
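The bdev_nbd suite that just finished boils down to a small RPC round trip: export a bdev through the kernel nbd driver, poll /proc/partitions until the node shows up, push direct I/O through it, and tear it down again. A minimal sketch of that loop against an already-running SPDK app (the socket path and RPC names are taken from the trace above; the bdev name, sizes, and temp file here are illustrative):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $RPC bdev_malloc_create -b malloc0 128 512     # 128 MiB bdev, 512 B blocks
    $RPC nbd_start_disk malloc0 /dev/nbd0          # expose it as /dev/nbd0

    # wait for the kernel to publish the device, as waitfornbd does above
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done

    # one-page O_DIRECT round trip, the same probe the trace runs per disk
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct

    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_get_disks                             # back to '[]' once all disks stop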
00:14:04.698 20:20:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:04.698 20:20:34 -- common/autotest_common.sh@10 -- # set +x 00:14:04.956 ************************************ 00:14:04.956 START TEST bdev_fio 00:14:04.956 ************************************ 00:14:04.956 20:20:35 -- common/autotest_common.sh@1111 -- # fio_test_suite '' 00:14:04.956 20:20:35 -- bdev/blockdev.sh@331 -- # local env_context 00:14:04.956 20:20:35 -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:04.956 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:04.956 20:20:35 -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:04.956 20:20:35 -- bdev/blockdev.sh@339 -- # echo '' 00:14:04.956 20:20:35 -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:14:04.956 20:20:35 -- bdev/blockdev.sh@339 -- # env_context= 00:14:04.956 20:20:35 -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:04.956 20:20:35 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:04.956 20:20:35 -- common/autotest_common.sh@1267 -- # local workload=verify 00:14:04.956 20:20:35 -- common/autotest_common.sh@1268 -- # local bdev_type=AIO 00:14:04.956 20:20:35 -- common/autotest_common.sh@1269 -- # local env_context= 00:14:04.956 20:20:35 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:14:04.956 20:20:35 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:04.956 20:20:35 -- common/autotest_common.sh@1277 -- # '[' -z verify ']' 00:14:04.956 20:20:35 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:14:04.956 20:20:35 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:04.956 20:20:35 -- common/autotest_common.sh@1287 -- # cat 00:14:04.956 20:20:35 -- common/autotest_common.sh@1299 -- # '[' verify == verify ']' 00:14:04.956 20:20:35 -- common/autotest_common.sh@1300 -- # cat 00:14:04.956 20:20:35 -- common/autotest_common.sh@1309 -- # '[' AIO == AIO ']' 00:14:04.956 20:20:35 -- common/autotest_common.sh@1310 -- # /usr/src/fio/fio --version 00:14:04.956 20:20:35 -- common/autotest_common.sh@1310 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:04.956 20:20:35 -- common/autotest_common.sh@1311 -- # echo serialize_overlap=1 00:14:04.956 20:20:35 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:04.956 20:20:35 -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:14:04.956 20:20:35 -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:14:04.956 20:20:35 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:04.956 20:20:35 -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:14:04.956 20:20:35 -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:14:04.956 20:20:35 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:04.956 20:20:35 -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:14:04.956 20:20:35 -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:14:04.956 20:20:35 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:04.956 20:20:35 -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n2]' 00:14:04.956 20:20:35 -- bdev/blockdev.sh@343 -- # echo filename=nvme2n2 00:14:04.956 20:20:35 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:04.956 20:20:35 -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n3]' 00:14:04.956 20:20:35 -- bdev/blockdev.sh@343 -- # echo 
filename=nvme2n3 00:14:04.956 20:20:35 -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:14:04.956 20:20:35 -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:14:04.956 20:20:35 -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:14:04.956 20:20:35 -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:04.956 20:20:35 -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:04.956 20:20:35 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:14:04.956 20:20:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:04.956 20:20:35 -- common/autotest_common.sh@10 -- # set +x 00:14:04.956 ************************************ 00:14:04.956 START TEST bdev_fio_rw_verify 00:14:04.956 ************************************ 00:14:04.957 20:20:35 -- common/autotest_common.sh@1111 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:04.957 20:20:35 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:04.957 20:20:35 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:14:04.957 20:20:35 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:04.957 20:20:35 -- common/autotest_common.sh@1325 -- # local sanitizers 00:14:04.957 20:20:35 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:04.957 20:20:35 -- common/autotest_common.sh@1327 -- # shift 00:14:04.957 20:20:35 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:14:04.957 20:20:35 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:14:04.957 20:20:35 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:04.957 20:20:35 -- common/autotest_common.sh@1331 -- # grep libasan 00:14:04.957 20:20:35 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:14:05.215 20:20:35 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:05.215 20:20:35 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:05.215 20:20:35 -- common/autotest_common.sh@1333 -- # break 00:14:05.215 20:20:35 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:05.215 20:20:35 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:05.215 
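The LD_PRELOAD step above is the one subtle part of running fio with the SPDK ioengine: spdk_bdev is loaded as an external fio plugin, and when the plugin was built with ASan the sanitizer runtime must be preloaded ahead of it or symbol interception fails. The trace resolves that runtime from the plugin's ldd output; the same pattern in isolation (plugin path per this repo layout; the runtime path varies by toolchain):

    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

    # pick out the ASan runtime the plugin links against, if any
    asan_lib=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')

    # preload sanitizer first, then the ioengine, exactly as the run above does
    LD_PRELOAD="${asan_lib:+$asan_lib }$PLUGIN" \
        /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio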
job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:05.215 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:05.215 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:05.215 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:05.215 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:05.215 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:05.215 fio-3.35 00:14:05.215 Starting 6 threads 00:14:17.420 00:14:17.420 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74600: Wed Apr 24 20:20:46 2024 00:14:17.420 read: IOPS=33.1k, BW=129MiB/s (136MB/s)(1293MiB/10001msec) 00:14:17.420 slat (usec): min=2, max=732, avg= 6.16, stdev= 4.98 00:14:17.420 clat (usec): min=84, max=3932, avg=556.78, stdev=230.00 00:14:17.420 lat (usec): min=88, max=3962, avg=562.94, stdev=230.77 00:14:17.420 clat percentiles (usec): 00:14:17.420 | 50.000th=[ 562], 99.000th=[ 1221], 99.900th=[ 2147], 99.990th=[ 3752], 00:14:17.420 | 99.999th=[ 3916] 00:14:17.420 write: IOPS=33.4k, BW=130MiB/s (137MB/s)(1304MiB/10001msec); 0 zone resets 00:14:17.420 slat (usec): min=10, max=1335, avg=24.25, stdev=33.39 00:14:17.420 clat (usec): min=74, max=3830, avg=647.44, stdev=250.40 00:14:17.420 lat (usec): min=90, max=3847, avg=671.70, stdev=255.86 00:14:17.420 clat percentiles (usec): 00:14:17.420 | 50.000th=[ 635], 99.000th=[ 1450], 99.900th=[ 2114], 99.990th=[ 2900], 00:14:17.420 | 99.999th=[ 3785] 00:14:17.420 bw ( KiB/s): min=104511, max=163821, per=100.00%, avg=133749.79, stdev=2649.40, samples=114 00:14:17.420 iops : min=26127, max=40955, avg=33437.00, stdev=662.35, samples=114 00:14:17.420 lat (usec) : 100=0.01%, 250=5.54%, 500=27.41%, 750=46.28%, 1000=15.67% 00:14:17.420 lat (msec) : 2=4.96%, 4=0.14% 00:14:17.420 cpu : usr=55.18%, sys=30.09%, ctx=8876, majf=0, minf=27492 00:14:17.420 IO depths : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:17.420 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.420 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.420 issued rwts: total=330941,333877,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.420 latency : target=0, window=0, percentile=100.00%, depth=8 00:14:17.420 00:14:17.420 Run status group 0 (all jobs): 00:14:17.420 READ: bw=129MiB/s (136MB/s), 129MiB/s-129MiB/s (136MB/s-136MB/s), io=1293MiB (1356MB), run=10001-10001msec 00:14:17.420 WRITE: bw=130MiB/s (137MB/s), 130MiB/s-130MiB/s (137MB/s-137MB/s), io=1304MiB (1368MB), run=10001-10001msec 00:14:17.679 ----------------------------------------------------- 00:14:17.679 Suppressions used: 00:14:17.679 count bytes template 00:14:17.679 6 48 /usr/src/fio/parse.c 00:14:17.679 2689 258144 /usr/src/fio/iolog.c 00:14:17.680 1 8 libtcmalloc_minimal.so 00:14:17.680 1 904 libcrypto.so 00:14:17.680 ----------------------------------------------------- 00:14:17.680 00:14:17.680 00:14:17.680 real 0m12.656s 00:14:17.680 user 0m35.260s 00:14:17.680 sys 0m18.468s 00:14:17.680 20:20:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:17.680 20:20:47 -- common/autotest_common.sh@10 -- # set +x 00:14:17.680 
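As a sanity check, the fio summary above is internally consistent: issued rwts reports 330,941 reads and 333,877 writes of 4 KiB each over the 10.001 s run, which reproduces the READ/WRITE status lines almost exactly:

    # 330941 * 4096 / 10.001 ~= 135.5 MB/s (129 MiB/s) -> 'READ:  bw=129MiB/s (136MB/s)'
    # 333877 * 4096 / 10.001 ~= 136.7 MB/s (130 MiB/s) -> 'WRITE: bw=130MiB/s (137MB/s)'
    awk 'BEGIN { printf "%.1f MB/s\n", 333877 * 4096 / 10.001 / 1e6 }'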
************************************ 00:14:17.680 END TEST bdev_fio_rw_verify 00:14:17.680 ************************************ 00:14:17.680 20:20:47 -- bdev/blockdev.sh@350 -- # rm -f 00:14:17.680 20:20:47 -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:17.680 20:20:47 -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:14:17.680 20:20:47 -- common/autotest_common.sh@1266 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:17.680 20:20:47 -- common/autotest_common.sh@1267 -- # local workload=trim 00:14:17.680 20:20:47 -- common/autotest_common.sh@1268 -- # local bdev_type= 00:14:17.680 20:20:47 -- common/autotest_common.sh@1269 -- # local env_context= 00:14:17.680 20:20:47 -- common/autotest_common.sh@1270 -- # local fio_dir=/usr/src/fio 00:14:17.680 20:20:47 -- common/autotest_common.sh@1272 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:17.680 20:20:47 -- common/autotest_common.sh@1277 -- # '[' -z trim ']' 00:14:17.680 20:20:47 -- common/autotest_common.sh@1281 -- # '[' -n '' ']' 00:14:17.680 20:20:47 -- common/autotest_common.sh@1285 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:17.680 20:20:47 -- common/autotest_common.sh@1287 -- # cat 00:14:17.680 20:20:47 -- common/autotest_common.sh@1299 -- # '[' trim == verify ']' 00:14:17.680 20:20:47 -- common/autotest_common.sh@1314 -- # '[' trim == trim ']' 00:14:17.680 20:20:47 -- common/autotest_common.sh@1315 -- # echo rw=trimwrite 00:14:17.680 20:20:47 -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "8d04516f-d232-4850-abe0-429b42069f0b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8d04516f-d232-4850-abe0-429b42069f0b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "03ede18d-8ec9-42dd-bdd8-9207ddc94bf7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "03ede18d-8ec9-42dd-bdd8-9207ddc94bf7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "dc455350-0ac9-47fc-9499-784b6f5ee9f3"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc455350-0ac9-47fc-9499-784b6f5ee9f3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": 
false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "7d1408e2-09fa-4da3-9e17-72f1b800d998"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "7d1408e2-09fa-4da3-9e17-72f1b800d998",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "966cd3dd-8832-4310-b63c-c2dfc1118261"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "966cd3dd-8832-4310-b63c-c2dfc1118261",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "0833a153-712e-40d3-8253-8740a39feec9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0833a153-712e-40d3-8253-8740a39feec9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:14:17.680 20:20:47 -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:14:17.940 20:20:47 -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:14:17.940 20:20:47 -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:17.940 20:20:47 -- bdev/blockdev.sh@362 -- # popd 00:14:17.940 /home/vagrant/spdk_repo/spdk 00:14:17.940 20:20:47 -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:14:17.940 20:20:47 -- bdev/blockdev.sh@364 -- # return 0 00:14:17.940 00:14:17.940 real 0m12.949s 00:14:17.940 user 0m35.392s 00:14:17.940 sys 0m18.616s 00:14:17.940 20:20:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:17.940 20:20:47 -- common/autotest_common.sh@10 -- # set +x 00:14:17.940 ************************************ 00:14:17.940 END TEST bdev_fio 00:14:17.940 ************************************ 00:14:17.940 20:20:48 -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:17.940 20:20:48 -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:17.940 20:20:48 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:14:17.940 20:20:48 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:14:17.940 20:20:48 -- common/autotest_common.sh@10 -- # set +x 00:14:17.940 ************************************ 00:14:17.940 START TEST bdev_verify 00:14:17.940 ************************************ 00:14:17.940 20:20:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:14:18.199 [2024-04-24 20:20:48.191377] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:14:18.199 [2024-04-24 20:20:48.191507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74777 ] 00:14:18.199 [2024-04-24 20:20:48.364321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:18.457 [2024-04-24 20:20:48.611507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.457 [2024-04-24 20:20:48.611541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.025 Running I/O for 5 seconds... 00:14:24.294 00:14:24.294 Latency(us) 00:14:24.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.294 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.294 Verification LBA range: start 0x0 length 0xa0000 00:14:24.294 nvme0n1 : 5.03 1730.70 6.76 0.00 0.00 73803.82 10212.04 72431.76 00:14:24.294 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.294 Verification LBA range: start 0xa0000 length 0xa0000 00:14:24.294 nvme0n1 : 5.05 1697.18 6.63 0.00 0.00 75322.23 12580.81 66115.03 00:14:24.294 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.294 Verification LBA range: start 0x0 length 0xbd0bd 00:14:24.294 nvme1n1 : 5.06 2805.39 10.96 0.00 0.00 45397.46 6632.56 61903.88 00:14:24.294 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.294 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:14:24.294 nvme1n1 : 5.06 2738.30 10.70 0.00 0.00 46553.72 6606.24 65693.92 00:14:24.294 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.294 Verification LBA range: start 0x0 length 0x80000 00:14:24.294 nvme2n1 : 5.07 1742.81 6.81 0.00 0.00 73015.12 12317.61 65272.80 00:14:24.294 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.294 Verification LBA range: start 0x80000 length 0x80000 00:14:24.294 nvme2n1 : 5.07 1717.34 6.71 0.00 0.00 74159.15 6843.12 65693.92 00:14:24.294 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.294 Verification LBA range: start 0x0 length 0x80000 00:14:24.294 nvme2n2 : 5.07 1741.66 6.80 0.00 0.00 72945.56 5500.81 69483.95 00:14:24.294 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.294 Verification LBA range: start 0x80000 length 0x80000 00:14:24.294 nvme2n2 : 5.06 1718.59 6.71 0.00 0.00 73965.76 6843.12 66115.03 00:14:24.294 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.294 Verification LBA range: start 0x0 length 0x80000 00:14:24.294 nvme2n3 : 5.07 1741.21 6.80 0.00 0.00 72814.43 6000.89 74116.22 00:14:24.294 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.294 Verification LBA range: start 0x80000 length 
0x80000 00:14:24.294 nvme2n3 : 5.07 1718.12 6.71 0.00 0.00 73888.04 7527.43 68220.61 00:14:24.294 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:14:24.294 Verification LBA range: start 0x0 length 0x20000 00:14:24.294 nvme3n1 : 5.07 1740.79 6.80 0.00 0.00 72721.04 6527.28 76221.79 00:14:24.294 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:24.294 Verification LBA range: start 0x20000 length 0x20000 00:14:24.295 nvme3n1 : 5.06 1695.18 6.62 0.00 0.00 74802.53 10317.31 73273.99 00:14:24.295 =================================================================================================================== 00:14:24.295 Total : 22787.26 89.01 0.00 0.00 66980.34 5500.81 76221.79 00:14:25.671 00:14:25.671 real 0m7.444s 00:14:25.671 user 0m11.103s 00:14:25.671 sys 0m2.116s 00:14:25.671 20:20:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:25.671 20:20:55 -- common/autotest_common.sh@10 -- # set +x 00:14:25.671 ************************************ 00:14:25.671 END TEST bdev_verify 00:14:25.671 ************************************ 00:14:25.671 20:20:55 -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:25.671 20:20:55 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:14:25.671 20:20:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:25.671 20:20:55 -- common/autotest_common.sh@10 -- # set +x 00:14:25.671 ************************************ 00:14:25.671 START TEST bdev_verify_big_io 00:14:25.671 ************************************ 00:14:25.672 20:20:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:14:25.672 [2024-04-24 20:20:55.785298] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:14:25.672 [2024-04-24 20:20:55.785430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74890 ] 00:14:25.929 [2024-04-24 20:20:55.956284] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:26.188 [2024-04-24 20:20:56.193382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.188 [2024-04-24 20:20:56.193413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.754 Running I/O for 5 seconds... 
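bdev_verify (which just completed) and bdev_verify_big_io (whose 5-second run starts here) drive the same bdevperf binary and differ only in I/O size, 4096 versus 65536 bytes. Stripped of the test harness, the invocation is just the command below, with flags copied from the trace; -m 0x3 together with -C is why two reactors start, matching the "Reactor started on core 0/1" lines. Its results follow below.

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # -q queue depth, -o I/O size in bytes, -w workload, -t runtime in seconds
    $BDEVPERF --json "$CONF" -q 128 -o 65536 -w verify -t 5 -C -m 0x3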
00:14:33.314 00:14:33.314 Latency(us) 00:14:33.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.314 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:33.314 Verification LBA range: start 0x0 length 0xa000 00:14:33.314 nvme0n1 : 5.61 156.79 9.80 0.00 0.00 801646.69 28846.37 1610343.22 00:14:33.314 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:33.314 Verification LBA range: start 0xa000 length 0xa000 00:14:33.314 nvme0n1 : 5.62 165.15 10.32 0.00 0.00 762470.66 29478.04 1522751.33 00:14:33.314 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:33.314 Verification LBA range: start 0x0 length 0xbd0b 00:14:33.314 nvme1n1 : 5.62 204.99 12.81 0.00 0.00 600393.29 10369.95 616512.15 00:14:33.314 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:33.314 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:33.314 nvme1n1 : 5.61 216.77 13.55 0.00 0.00 558717.23 10791.07 640094.59 00:14:33.314 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:33.314 Verification LBA range: start 0x0 length 0x8000 00:14:33.314 nvme2n1 : 5.60 125.72 7.86 0.00 0.00 965709.69 22845.48 2290864.84 00:14:33.314 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:33.314 Verification LBA range: start 0x8000 length 0x8000 00:14:33.314 nvme2n1 : 5.62 167.94 10.50 0.00 0.00 720225.91 16423.48 1347567.55 00:14:33.314 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:33.314 Verification LBA range: start 0x0 length 0x8000 00:14:33.314 nvme2n2 : 5.60 165.58 10.35 0.00 0.00 725833.10 26003.84 1408208.09 00:14:33.314 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:33.314 Verification LBA range: start 0x8000 length 0x8000 00:14:33.314 nvme2n2 : 5.61 133.96 8.37 0.00 0.00 895064.93 14212.63 2061778.35 00:14:33.314 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:33.314 Verification LBA range: start 0x0 length 0x8000 00:14:33.314 nvme2n3 : 5.62 170.74 10.67 0.00 0.00 698122.46 14317.91 950035.12 00:14:33.314 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:33.314 Verification LBA range: start 0x8000 length 0x8000 00:14:33.314 nvme2n3 : 5.62 122.53 7.66 0.00 0.00 968734.09 19055.45 2183059.43 00:14:33.314 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:33.314 Verification LBA range: start 0x0 length 0x2000 00:14:33.314 nvme3n1 : 5.61 176.82 11.05 0.00 0.00 665305.65 20213.51 1266713.50 00:14:33.314 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:33.314 Verification LBA range: start 0x2000 length 0x2000 00:14:33.314 nvme3n1 : 5.63 171.12 10.70 0.00 0.00 684671.76 9948.84 1792264.84 00:14:33.314 =================================================================================================================== 00:14:33.314 Total : 1978.12 123.63 0.00 0.00 733477.44 9948.84 2290864.84 00:14:33.878 00:14:33.878 real 0m8.375s 00:14:33.878 user 0m14.886s 00:14:33.878 sys 0m0.612s 00:14:33.878 20:21:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:33.878 ************************************ 00:14:33.878 END TEST bdev_verify_big_io 00:14:33.878 ************************************ 00:14:33.878 20:21:04 -- common/autotest_common.sh@10 -- # set +x 00:14:34.136 20:21:04 -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:34.136 20:21:04 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:14:34.136 20:21:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:34.136 20:21:04 -- common/autotest_common.sh@10 -- # set +x 00:14:34.136 ************************************ 00:14:34.136 START TEST bdev_write_zeroes 00:14:34.136 ************************************ 00:14:34.136 20:21:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:34.136 [2024-04-24 20:21:04.321487] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:14:34.136 [2024-04-24 20:21:04.321617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75011 ] 00:14:34.393 [2024-04-24 20:21:04.489358] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.651 [2024-04-24 20:21:04.731699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.216 Running I/O for 1 seconds... 00:14:36.152 00:14:36.152 Latency(us) 00:14:36.152 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.152 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:36.152 nvme0n1 : 1.02 7376.99 28.82 0.00 0.00 17336.03 9211.89 30320.27 00:14:36.152 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:36.152 nvme1n1 : 1.02 12103.33 47.28 0.00 0.00 10556.74 3947.95 22845.48 00:14:36.152 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:36.152 nvme2n1 : 1.03 7362.87 28.76 0.00 0.00 17257.21 6290.40 31373.06 00:14:36.152 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:36.152 nvme2n2 : 1.03 7348.57 28.71 0.00 0.00 17279.35 7316.87 31162.50 00:14:36.152 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:36.152 nvme2n3 : 1.03 7334.48 28.65 0.00 0.00 17300.07 7474.79 30951.94 00:14:36.152 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:36.152 nvme3n1 : 1.03 7320.35 28.60 0.00 0.00 17326.18 7737.99 31162.50 00:14:36.152 =================================================================================================================== 00:14:36.152 Total : 48846.58 190.81 0.00 0.00 15637.69 3947.95 31373.06 00:14:37.528 00:14:37.528 real 0m3.336s 00:14:37.528 user 0m2.584s 00:14:37.528 sys 0m0.565s 00:14:37.528 20:21:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:37.528 ************************************ 00:14:37.528 END TEST bdev_write_zeroes 00:14:37.528 ************************************ 00:14:37.528 20:21:07 -- common/autotest_common.sh@10 -- # set +x 00:14:37.528 20:21:07 -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:37.528 20:21:07 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:14:37.528 20:21:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:37.528 20:21:07 -- common/autotest_common.sh@10 -- # 
set +x 00:14:37.528 ************************************ 00:14:37.528 START TEST bdev_json_nonenclosed 00:14:37.528 ************************************ 00:14:37.528 20:21:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:37.787 [2024-04-24 20:21:07.820730] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:14:37.787 [2024-04-24 20:21:07.820909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75076 ] 00:14:37.787 [2024-04-24 20:21:08.001088] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.046 [2024-04-24 20:21:08.242620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.046 [2024-04-24 20:21:08.242725] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:38.046 [2024-04-24 20:21:08.242749] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:38.046 [2024-04-24 20:21:08.242762] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:38.613 00:14:38.613 real 0m0.976s 00:14:38.613 user 0m0.727s 00:14:38.613 sys 0m0.141s 00:14:38.613 20:21:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:38.613 20:21:08 -- common/autotest_common.sh@10 -- # set +x 00:14:38.613 ************************************ 00:14:38.613 END TEST bdev_json_nonenclosed 00:14:38.613 ************************************ 00:14:38.613 20:21:08 -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:38.613 20:21:08 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:14:38.613 20:21:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:38.613 20:21:08 -- common/autotest_common.sh@10 -- # set +x 00:14:38.872 ************************************ 00:14:38.872 START TEST bdev_json_nonarray 00:14:38.872 ************************************ 00:14:38.872 20:21:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:38.872 [2024-04-24 20:21:08.943213] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:14:38.872 [2024-04-24 20:21:08.943326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75111 ] 00:14:39.131 [2024-04-24 20:21:09.111376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.131 [2024-04-24 20:21:09.350353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.131 [2024-04-24 20:21:09.350465] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
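bdev_json_nonenclosed (above) and bdev_json_nonarray (whose error cascade continues below) are negative tests: each hands bdevperf a deliberately malformed config and passes as long as json_config rejects it with the error shown rather than crashing. The fixture files themselves are not echoed into this log, but hypothetical minimal inputs matching the two error strings would look like:

    # nonenclosed.json: valid members, but 'not enclosed in {}'
    "subsystems": []

    # nonarray.json: enclosed, but "'subsystems' should be an array"
    { "subsystems": {} }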
00:14:39.131 [2024-04-24 20:21:09.350491] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:39.131 [2024-04-24 20:21:09.350504] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:39.699 00:14:39.699 real 0m0.940s 00:14:39.699 user 0m0.699s 00:14:39.699 sys 0m0.136s 00:14:39.699 20:21:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:39.699 ************************************ 00:14:39.699 END TEST bdev_json_nonarray 00:14:39.699 ************************************ 00:14:39.699 20:21:09 -- common/autotest_common.sh@10 -- # set +x 00:14:39.699 20:21:09 -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:14:39.699 20:21:09 -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:14:39.699 20:21:09 -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:14:39.699 20:21:09 -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:14:39.699 20:21:09 -- bdev/blockdev.sh@811 -- # cleanup 00:14:39.699 20:21:09 -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:39.699 20:21:09 -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:39.699 20:21:09 -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:39.699 20:21:09 -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:39.699 20:21:09 -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:39.699 20:21:09 -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:39.699 20:21:09 -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:40.634 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:41.199 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:41.199 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:41.199 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:41.457 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:41.457 00:14:41.457 real 1m3.587s 00:14:41.457 user 1m39.488s 00:14:41.457 sys 0m30.884s 00:14:41.457 20:21:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:41.457 20:21:11 -- common/autotest_common.sh@10 -- # set +x 00:14:41.457 ************************************ 00:14:41.457 END TEST blockdev_xnvme 00:14:41.457 ************************************ 00:14:41.457 20:21:11 -- spdk/autotest.sh@249 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:41.457 20:21:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:41.457 20:21:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:41.457 20:21:11 -- common/autotest_common.sh@10 -- # set +x 00:14:41.714 ************************************ 00:14:41.714 START TEST ublk 00:14:41.714 ************************************ 00:14:41.714 20:21:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:41.714 * Looking for test storage... 
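The "nvme -> uio_pci_generic" lines a few entries above come from scripts/setup.sh, which the harness reruns between suites: it unbinds each NVMe controller from the kernel nvme driver and binds it to a userspace-I/O stub so SPDK can drive it from user space, skipping the mounted boot disk (hence the "Active devices: mount@vda..." line). Typical manual usage, assuming the standard setup.sh subcommands:

    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh          # bind to uio/vfio
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status   # show current bindings
    sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset    # hand devices back to the kernel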
00:14:41.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:41.714 20:21:11 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:41.714 20:21:11 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:41.714 20:21:11 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:41.714 20:21:11 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:41.714 20:21:11 -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:41.714 20:21:11 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:41.714 20:21:11 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:41.714 20:21:11 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:41.714 20:21:11 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:41.714 20:21:11 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:41.714 20:21:11 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:41.715 20:21:11 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:41.715 20:21:11 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:41.715 20:21:11 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:41.715 20:21:11 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:41.715 20:21:11 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:41.715 20:21:11 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:41.715 20:21:11 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:41.715 20:21:11 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:41.715 20:21:11 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:41.715 20:21:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:41.715 20:21:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:41.715 20:21:11 -- common/autotest_common.sh@10 -- # set +x 00:14:41.973 ************************************ 00:14:41.973 START TEST test_save_ublk_config 00:14:41.973 ************************************ 00:14:41.973 20:21:11 -- common/autotest_common.sh@1111 -- # test_save_config 00:14:41.973 20:21:11 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:41.973 20:21:11 -- ublk/ublk.sh@103 -- # tgtpid=75402 00:14:41.973 20:21:11 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:41.973 20:21:11 -- ublk/ublk.sh@106 -- # waitforlisten 75402 00:14:41.973 20:21:11 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:41.973 20:21:11 -- common/autotest_common.sh@817 -- # '[' -z 75402 ']' 00:14:41.973 20:21:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.973 20:21:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:41.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.973 20:21:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.973 20:21:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:41.973 20:21:11 -- common/autotest_common.sh@10 -- # set +x 00:14:41.973 [2024-04-24 20:21:12.067430] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
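test_save_ublk_config, starting here, builds a ublk target over a malloc bdev and then snapshots the running state with save_config; the JSON blob dumped further down is that snapshot. Reduced to the save/replay pair (socket defaults to /var/tmp/spdk.sock; the setup calls themselves, ublk_create_target, bdev_malloc_create, and ublk_start_disk, are all visible in the trace below):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC save_config > ublk_config.json    # after target + disk are up
    $RPC load_config < ublk_config.json    # replay the snapshot on a fresh spdk_tgt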
00:14:41.973 [2024-04-24 20:21:12.067534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75402 ] 00:14:42.230 [2024-04-24 20:21:12.240750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.488 [2024-04-24 20:21:12.491616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:43.421 20:21:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:43.421 20:21:13 -- common/autotest_common.sh@850 -- # return 0 00:14:43.421 20:21:13 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:43.421 20:21:13 -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:43.421 20:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.421 20:21:13 -- common/autotest_common.sh@10 -- # set +x 00:14:43.421 [2024-04-24 20:21:13.454234] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:43.421 malloc0 00:14:43.421 [2024-04-24 20:21:13.549014] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:43.421 [2024-04-24 20:21:13.549105] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:43.421 [2024-04-24 20:21:13.549115] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:43.421 [2024-04-24 20:21:13.549126] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:43.421 [2024-04-24 20:21:13.557970] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:43.421 [2024-04-24 20:21:13.558000] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:43.421 [2024-04-24 20:21:13.564885] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:43.421 [2024-04-24 20:21:13.564989] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:43.421 [2024-04-24 20:21:13.581878] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:43.421 0 00:14:43.421 20:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.421 20:21:13 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:43.421 20:21:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:43.421 20:21:13 -- common/autotest_common.sh@10 -- # set +x 00:14:43.679 20:21:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:43.679 20:21:13 -- ublk/ublk.sh@115 -- # config='{ 00:14:43.679 "subsystems": [ 00:14:43.679 { 00:14:43.679 "subsystem": "keyring", 00:14:43.679 "config": [] 00:14:43.679 }, 00:14:43.679 { 00:14:43.679 "subsystem": "iobuf", 00:14:43.679 "config": [ 00:14:43.679 { 00:14:43.679 "method": "iobuf_set_options", 00:14:43.679 "params": { 00:14:43.679 "small_pool_count": 8192, 00:14:43.679 "large_pool_count": 1024, 00:14:43.679 "small_bufsize": 8192, 00:14:43.679 "large_bufsize": 135168 00:14:43.679 } 00:14:43.679 } 00:14:43.679 ] 00:14:43.679 }, 00:14:43.679 { 00:14:43.679 "subsystem": "sock", 00:14:43.679 "config": [ 00:14:43.679 { 00:14:43.679 "method": "sock_impl_set_options", 00:14:43.679 "params": { 00:14:43.679 "impl_name": "posix", 00:14:43.679 "recv_buf_size": 2097152, 00:14:43.679 "send_buf_size": 2097152, 00:14:43.679 "enable_recv_pipe": true, 00:14:43.679 "enable_quickack": false, 00:14:43.679 "enable_placement_id": 0, 00:14:43.679 "enable_zerocopy_send_server": true, 00:14:43.679 
"enable_zerocopy_send_client": false, 00:14:43.679 "zerocopy_threshold": 0, 00:14:43.679 "tls_version": 0, 00:14:43.679 "enable_ktls": false 00:14:43.679 } 00:14:43.679 }, 00:14:43.679 { 00:14:43.679 "method": "sock_impl_set_options", 00:14:43.679 "params": { 00:14:43.679 "impl_name": "ssl", 00:14:43.679 "recv_buf_size": 4096, 00:14:43.679 "send_buf_size": 4096, 00:14:43.679 "enable_recv_pipe": true, 00:14:43.679 "enable_quickack": false, 00:14:43.679 "enable_placement_id": 0, 00:14:43.679 "enable_zerocopy_send_server": true, 00:14:43.679 "enable_zerocopy_send_client": false, 00:14:43.679 "zerocopy_threshold": 0, 00:14:43.679 "tls_version": 0, 00:14:43.679 "enable_ktls": false 00:14:43.679 } 00:14:43.679 } 00:14:43.679 ] 00:14:43.679 }, 00:14:43.679 { 00:14:43.679 "subsystem": "vmd", 00:14:43.679 "config": [] 00:14:43.679 }, 00:14:43.679 { 00:14:43.679 "subsystem": "accel", 00:14:43.679 "config": [ 00:14:43.679 { 00:14:43.679 "method": "accel_set_options", 00:14:43.679 "params": { 00:14:43.679 "small_cache_size": 128, 00:14:43.679 "large_cache_size": 16, 00:14:43.679 "task_count": 2048, 00:14:43.679 "sequence_count": 2048, 00:14:43.679 "buf_count": 2048 00:14:43.679 } 00:14:43.679 } 00:14:43.679 ] 00:14:43.679 }, 00:14:43.679 { 00:14:43.679 "subsystem": "bdev", 00:14:43.679 "config": [ 00:14:43.679 { 00:14:43.679 "method": "bdev_set_options", 00:14:43.679 "params": { 00:14:43.679 "bdev_io_pool_size": 65535, 00:14:43.679 "bdev_io_cache_size": 256, 00:14:43.679 "bdev_auto_examine": true, 00:14:43.679 "iobuf_small_cache_size": 128, 00:14:43.680 "iobuf_large_cache_size": 16 00:14:43.680 } 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "method": "bdev_raid_set_options", 00:14:43.680 "params": { 00:14:43.680 "process_window_size_kb": 1024 00:14:43.680 } 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "method": "bdev_iscsi_set_options", 00:14:43.680 "params": { 00:14:43.680 "timeout_sec": 30 00:14:43.680 } 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "method": "bdev_nvme_set_options", 00:14:43.680 "params": { 00:14:43.680 "action_on_timeout": "none", 00:14:43.680 "timeout_us": 0, 00:14:43.680 "timeout_admin_us": 0, 00:14:43.680 "keep_alive_timeout_ms": 10000, 00:14:43.680 "arbitration_burst": 0, 00:14:43.680 "low_priority_weight": 0, 00:14:43.680 "medium_priority_weight": 0, 00:14:43.680 "high_priority_weight": 0, 00:14:43.680 "nvme_adminq_poll_period_us": 10000, 00:14:43.680 "nvme_ioq_poll_period_us": 0, 00:14:43.680 "io_queue_requests": 0, 00:14:43.680 "delay_cmd_submit": true, 00:14:43.680 "transport_retry_count": 4, 00:14:43.680 "bdev_retry_count": 3, 00:14:43.680 "transport_ack_timeout": 0, 00:14:43.680 "ctrlr_loss_timeout_sec": 0, 00:14:43.680 "reconnect_delay_sec": 0, 00:14:43.680 "fast_io_fail_timeout_sec": 0, 00:14:43.680 "disable_auto_failback": false, 00:14:43.680 "generate_uuids": false, 00:14:43.680 "transport_tos": 0, 00:14:43.680 "nvme_error_stat": false, 00:14:43.680 "rdma_srq_size": 0, 00:14:43.680 "io_path_stat": false, 00:14:43.680 "allow_accel_sequence": false, 00:14:43.680 "rdma_max_cq_size": 0, 00:14:43.680 "rdma_cm_event_timeout_ms": 0, 00:14:43.680 "dhchap_digests": [ 00:14:43.680 "sha256", 00:14:43.680 "sha384", 00:14:43.680 "sha512" 00:14:43.680 ], 00:14:43.680 "dhchap_dhgroups": [ 00:14:43.680 "null", 00:14:43.680 "ffdhe2048", 00:14:43.680 "ffdhe3072", 00:14:43.680 "ffdhe4096", 00:14:43.680 "ffdhe6144", 00:14:43.680 "ffdhe8192" 00:14:43.680 ] 00:14:43.680 } 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "method": "bdev_nvme_set_hotplug", 00:14:43.680 "params": { 00:14:43.680 
"period_us": 100000, 00:14:43.680 "enable": false 00:14:43.680 } 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "method": "bdev_malloc_create", 00:14:43.680 "params": { 00:14:43.680 "name": "malloc0", 00:14:43.680 "num_blocks": 8192, 00:14:43.680 "block_size": 4096, 00:14:43.680 "physical_block_size": 4096, 00:14:43.680 "uuid": "f7c56a0d-e239-45fb-8ea1-1ccfcd672ecb", 00:14:43.680 "optimal_io_boundary": 0 00:14:43.680 } 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "method": "bdev_wait_for_examine" 00:14:43.680 } 00:14:43.680 ] 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "subsystem": "scsi", 00:14:43.680 "config": null 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "subsystem": "scheduler", 00:14:43.680 "config": [ 00:14:43.680 { 00:14:43.680 "method": "framework_set_scheduler", 00:14:43.680 "params": { 00:14:43.680 "name": "static" 00:14:43.680 } 00:14:43.680 } 00:14:43.680 ] 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "subsystem": "vhost_scsi", 00:14:43.680 "config": [] 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "subsystem": "vhost_blk", 00:14:43.680 "config": [] 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "subsystem": "ublk", 00:14:43.680 "config": [ 00:14:43.680 { 00:14:43.680 "method": "ublk_create_target", 00:14:43.680 "params": { 00:14:43.680 "cpumask": "1" 00:14:43.680 } 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "method": "ublk_start_disk", 00:14:43.680 "params": { 00:14:43.680 "bdev_name": "malloc0", 00:14:43.680 "ublk_id": 0, 00:14:43.680 "num_queues": 1, 00:14:43.680 "queue_depth": 128 00:14:43.680 } 00:14:43.680 } 00:14:43.680 ] 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "subsystem": "nbd", 00:14:43.680 "config": [] 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "subsystem": "nvmf", 00:14:43.680 "config": [ 00:14:43.680 { 00:14:43.680 "method": "nvmf_set_config", 00:14:43.680 "params": { 00:14:43.680 "discovery_filter": "match_any", 00:14:43.680 "admin_cmd_passthru": { 00:14:43.680 "identify_ctrlr": false 00:14:43.680 } 00:14:43.680 } 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "method": "nvmf_set_max_subsystems", 00:14:43.680 "params": { 00:14:43.680 "max_subsystems": 1024 00:14:43.680 } 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "method": "nvmf_set_crdt", 00:14:43.680 "params": { 00:14:43.680 "crdt1": 0, 00:14:43.680 "crdt2": 0, 00:14:43.680 "crdt3": 0 00:14:43.680 } 00:14:43.680 } 00:14:43.680 ] 00:14:43.680 }, 00:14:43.680 { 00:14:43.680 "subsystem": "iscsi", 00:14:43.680 "config": [ 00:14:43.680 { 00:14:43.680 "method": "iscsi_set_options", 00:14:43.680 "params": { 00:14:43.680 "node_base": "iqn.2016-06.io.spdk", 00:14:43.680 "max_sessions": 128, 00:14:43.680 "max_connections_per_session": 2, 00:14:43.680 "max_queue_depth": 64, 00:14:43.680 "default_time2wait": 2, 00:14:43.680 "default_time2retain": 20, 00:14:43.680 "first_burst_length": 8192, 00:14:43.680 "immediate_data": true, 00:14:43.680 "allow_duplicated_isid": false, 00:14:43.680 "error_recovery_level": 0, 00:14:43.680 "nop_timeout": 60, 00:14:43.680 "nop_in_interval": 30, 00:14:43.680 "disable_chap": false, 00:14:43.680 "require_chap": false, 00:14:43.680 "mutual_chap": false, 00:14:43.680 "chap_group": 0, 00:14:43.680 "max_large_datain_per_connection": 64, 00:14:43.680 "max_r2t_per_connection": 4, 00:14:43.680 "pdu_pool_size": 36864, 00:14:43.680 "immediate_data_pool_size": 16384, 00:14:43.680 "data_out_pool_size": 2048 00:14:43.680 } 00:14:43.680 } 00:14:43.680 ] 00:14:43.680 } 00:14:43.680 ] 00:14:43.680 }' 00:14:43.680 20:21:13 -- ublk/ublk.sh@116 -- # killprocess 75402 00:14:43.680 20:21:13 -- 
common/autotest_common.sh@936 -- # '[' -z 75402 ']' 00:14:43.680 20:21:13 -- common/autotest_common.sh@940 -- # kill -0 75402 00:14:43.680 20:21:13 -- common/autotest_common.sh@941 -- # uname 00:14:43.680 20:21:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:43.680 20:21:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75402 00:14:43.680 20:21:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:43.680 20:21:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:43.680 20:21:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75402' 00:14:43.680 killing process with pid 75402 00:14:43.680 20:21:13 -- common/autotest_common.sh@955 -- # kill 75402 00:14:43.680 20:21:13 -- common/autotest_common.sh@960 -- # wait 75402 00:14:45.580 [2024-04-24 20:21:15.588047] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:45.580 [2024-04-24 20:21:15.626889] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:45.580 [2024-04-24 20:21:15.627062] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:45.580 [2024-04-24 20:21:15.635890] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:45.580 [2024-04-24 20:21:15.635945] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:45.580 [2024-04-24 20:21:15.635954] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:45.580 [2024-04-24 20:21:15.635982] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:45.580 [2024-04-24 20:21:15.639999] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:46.957 20:21:17 -- ublk/ublk.sh@119 -- # tgtpid=75473 00:14:46.957 20:21:17 -- ublk/ublk.sh@121 -- # waitforlisten 75473 00:14:46.957 20:21:17 -- common/autotest_common.sh@817 -- # '[' -z 75473 ']' 00:14:46.957 20:21:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.957 20:21:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:46.957 20:21:17 -- ublk/ublk.sh@118 -- # echo '{ 00:14:46.957 "subsystems": [ 00:14:46.957 { 00:14:46.957 "subsystem": "keyring", 00:14:46.957 "config": [] 00:14:46.957 }, 00:14:46.957 { 00:14:46.957 "subsystem": "iobuf", 00:14:46.957 "config": [ 00:14:46.957 { 00:14:46.957 "method": "iobuf_set_options", 00:14:46.957 "params": { 00:14:46.957 "small_pool_count": 8192, 00:14:46.957 "large_pool_count": 1024, 00:14:46.957 "small_bufsize": 8192, 00:14:46.957 "large_bufsize": 135168 00:14:46.957 } 00:14:46.957 } 00:14:46.957 ] 00:14:46.957 }, 00:14:46.957 { 00:14:46.957 "subsystem": "sock", 00:14:46.957 "config": [ 00:14:46.957 { 00:14:46.957 "method": "sock_impl_set_options", 00:14:46.957 "params": { 00:14:46.957 "impl_name": "posix", 00:14:46.957 "recv_buf_size": 2097152, 00:14:46.957 "send_buf_size": 2097152, 00:14:46.957 "enable_recv_pipe": true, 00:14:46.957 "enable_quickack": false, 00:14:46.957 "enable_placement_id": 0, 00:14:46.957 "enable_zerocopy_send_server": true, 00:14:46.957 "enable_zerocopy_send_client": false, 00:14:46.957 "zerocopy_threshold": 0, 00:14:46.957 "tls_version": 0, 00:14:46.957 "enable_ktls": false 00:14:46.957 } 00:14:46.957 }, 00:14:46.957 { 00:14:46.957 "method": "sock_impl_set_options", 00:14:46.957 "params": { 00:14:46.957 "impl_name": "ssl", 00:14:46.957 "recv_buf_size": 4096, 00:14:46.957 "send_buf_size": 4096, 00:14:46.957 "enable_recv_pipe": true, 00:14:46.957 "enable_quickack": false, 00:14:46.957 
"enable_placement_id": 0, 00:14:46.957 "enable_zerocopy_send_server": true, 00:14:46.957 "enable_zerocopy_send_client": false, 00:14:46.957 "zerocopy_threshold": 0, 00:14:46.957 "tls_version": 0, 00:14:46.957 "enable_ktls": false 00:14:46.957 } 00:14:46.957 } 00:14:46.957 ] 00:14:46.957 }, 00:14:46.957 { 00:14:46.957 "subsystem": "vmd", 00:14:46.957 "config": [] 00:14:46.957 }, 00:14:46.957 { 00:14:46.957 "subsystem": "accel", 00:14:46.957 "config": [ 00:14:46.957 { 00:14:46.957 "method": "accel_set_options", 00:14:46.957 "params": { 00:14:46.957 "small_cache_size": 128, 00:14:46.957 "large_cache_size": 16, 00:14:46.957 "task_count": 2048, 00:14:46.957 "sequence_count": 2048, 00:14:46.957 "buf_count": 2048 00:14:46.957 } 00:14:46.957 } 00:14:46.957 ] 00:14:46.957 }, 00:14:46.957 { 00:14:46.957 "subsystem": "bdev", 00:14:46.957 "config": [ 00:14:46.957 { 00:14:46.957 "method": "bdev_set_options", 00:14:46.957 "params": { 00:14:46.957 "bdev_io_pool_size": 65535, 00:14:46.957 "bdev_io_cache_size": 256, 00:14:46.957 "bdev_auto_examine": true, 00:14:46.957 "iobuf_small_cache_size": 128, 00:14:46.957 "iobuf_large_cache_size": 16 00:14:46.957 } 00:14:46.957 }, 00:14:46.957 { 00:14:46.957 "method": "bdev_raid_set_options", 00:14:46.957 "params": { 00:14:46.957 "process_window_size_kb": 1024 00:14:46.957 } 00:14:46.957 }, 00:14:46.957 { 00:14:46.957 "method": "bdev_iscsi_set_options", 00:14:46.957 "params": { 00:14:46.957 "timeout_sec": 30 00:14:46.957 } 00:14:46.957 }, 00:14:46.957 { 00:14:46.957 "method": "bdev_nvme_set_options", 00:14:46.957 "params": { 00:14:46.957 "action_on_timeout": "none", 00:14:46.957 "timeout_us": 0, 00:14:46.957 "timeout_admin_us": 0, 00:14:46.957 "keep_alive_timeout_ms": 10000, 00:14:46.957 "arbitration_burst": 0, 00:14:46.957 "low_priority_weight": 0, 00:14:46.957 "medium_priority_weight": 0, 00:14:46.957 "high_priority_weight": 0, 00:14:46.958 "nvme_adminq_poll_period_us": 10000, 00:14:46.958 "nvme_ioq_poll_period_us": 0, 00:14:46.958 "io_queue_requests": 0, 00:14:46.958 "delay_cmd_submit": true, 00:14:46.958 "transport_retry_count": 4, 00:14:46.958 "bdev_retry_count": 3, 00:14:46.958 "transport_ack_timeout": 0, 00:14:46.958 "ctrlr_loss_timeout_sec": 0, 00:14:46.958 "reconnect_delay_sec": 0, 00:14:46.958 "fast_io_fail_timeout_sec": 0, 00:14:46.958 "disable_auto_failback": false, 00:14:46.958 "generate_uuids": false, 00:14:46.958 "transport_tos": 0, 00:14:46.958 "nvme_error_stat": false, 00:14:46.958 "rdma_srq_size": 0, 00:14:46.958 "io_path_stat": false, 00:14:46.958 "allow_accel_sequence": false, 00:14:46.958 "rdma_max_cq_size": 0, 00:14:46.958 "rdma_cm_event_timeout_ms": 0, 00:14:46.958 "dhchap_digests": [ 00:14:46.958 "sha256", 00:14:46.958 "sha384", 00:14:46.958 "sha512" 00:14:46.958 ], 00:14:46.958 "dhchap_dhgroups": [ 00:14:46.958 "null", 00:14:46.958 "ffdhe2048", 00:14:46.958 "ffdhe3072", 00:14:46.958 "ffdhe4096", 00:14:46.958 "ffdhe6144", 00:14:46.958 "ffdhe8192" 00:14:46.958 ] 00:14:46.958 } 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "method": "bdev_nvme_set_hotplug", 00:14:46.958 "params": { 00:14:46.958 "period_us": 100000, 00:14:46.958 "enable": false 00:14:46.958 } 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "method": "bdev_malloc_create", 00:14:46.958 "params": { 00:14:46.958 "name": "malloc0", 00:14:46.958 "num_blocks": 8192, 00:14:46.958 "block_size": 4096, 00:14:46.958 "physical_block_size": 4096, 00:14:46.958 "uuid": "f7c56a0d-e239-45fb-8ea1-1ccfcd672ecb", 00:14:46.958 "optimal_io_boundary": 0 00:14:46.958 } 00:14:46.958 }, 00:14:46.958 { 
00:14:46.958 "method": "bdev_wait_for_examine" 00:14:46.958 } 00:14:46.958 ] 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "subsystem": "scsi", 00:14:46.958 "config": null 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "subsystem": "scheduler", 00:14:46.958 "config": [ 00:14:46.958 { 00:14:46.958 "method": "framework_set_scheduler", 00:14:46.958 "params": { 00:14:46.958 "name": "static" 00:14:46.958 } 00:14:46.958 } 00:14:46.958 ] 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "subsystem": "vhost_scsi", 00:14:46.958 "config": [] 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "subsystem": "vhost_blk", 00:14:46.958 "config": [] 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "subsystem": "ublk", 00:14:46.958 "config": [ 00:14:46.958 { 00:14:46.958 "method": "ublk_create_target", 00:14:46.958 "params": { 00:14:46.958 "cpumask": "1" 00:14:46.958 } 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "method": "ublk_start_disk", 00:14:46.958 "params": { 00:14:46.958 "bdev_name": "malloc0", 00:14:46.958 "ublk_id": 0, 00:14:46.958 "num_queues": 1, 00:14:46.958 "queue_depth": 128 00:14:46.958 } 00:14:46.958 } 00:14:46.958 ] 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "subsystem": "nbd", 00:14:46.958 "config": [] 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "subsystem": "nvmf", 00:14:46.958 "config": [ 00:14:46.958 { 00:14:46.958 "method": "nvmf_set_config", 00:14:46.958 "params": { 00:14:46.958 "discovery_filter": "match_any", 00:14:46.958 "admin_cmd_passthru": { 00:14:46.958 "identify_ctrlr": false 00:14:46.958 } 00:14:46.958 } 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "method": "nvmf_set_max_subsystems", 00:14:46.958 "params": { 00:14:46.958 "max_subsystems": 1024 00:14:46.958 } 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "method": "nvmf_set_crdt", 00:14:46.958 "params": { 00:14:46.958 "crdt1": 0, 00:14:46.958 "crdt2": 0, 00:14:46.958 "crdt3": 0 00:14:46.958 } 00:14:46.958 } 00:14:46.958 ] 00:14:46.958 }, 00:14:46.958 { 00:14:46.958 "subsystem": "iscsi", 00:14:46.958 "config": [ 00:14:46.958 { 00:14:46.958 "method": "iscsi_set_options", 00:14:46.958 "params": { 00:14:46.958 "node_base": "iqn.2016-06.io.spdk", 00:14:46.958 "max_sessions": 128, 00:14:46.958 "max_connections_per_session": 2, 00:14:46.958 "max_queue_depth": 64, 00:14:46.958 "default_time2wait": 2, 00:14:46.958 "default_time2retain": 20, 00:14:46.958 "first_burst_length": 8192, 00:14:46.958 "immediate_data": true, 00:14:46.958 "allow_duplicated_isid": false, 00:14:46.958 "error_recovery_level": 0, 00:14:46.958 "nop_timeout": 60, 00:14:46.958 "nop_in_interval": 30, 00:14:46.958 "disable_chap": false, 00:14:46.958 "require_chap": false, 00:14:46.958 "mutual_chap": false, 00:14:46.958 "chap_group": 0, 00:14:46.958 "max_large_datain_per_connection": 64, 00:14:46.958 "max_r2t_per_connection": 4, 00:14:46.958 "pdu_pool_size": 36864, 00:14:46.958 "immediate_data_pool_size": 16384, 00:14:46.958 "data_out_pool_size": 2048 00:14:46.958 } 00:14:46.958 } 00:14:46.958 ] 00:14:46.958 } 00:14:46.958 ] 00:14:46.958 }' 00:14:46.958 20:21:17 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:46.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.958 20:21:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:46.958 20:21:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:46.958 20:21:17 -- common/autotest_common.sh@10 -- # set +x 00:14:46.958 [2024-04-24 20:21:17.188525] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:14:46.958 [2024-04-24 20:21:17.188639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75473 ] 00:14:47.216 [2024-04-24 20:21:17.360283] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.473 [2024-04-24 20:21:17.610946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.845 [2024-04-24 20:21:18.715174] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:48.845 [2024-04-24 20:21:18.721985] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:48.845 [2024-04-24 20:21:18.722070] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:48.845 [2024-04-24 20:21:18.722086] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:48.845 [2024-04-24 20:21:18.722094] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:48.845 [2024-04-24 20:21:18.730950] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:48.845 [2024-04-24 20:21:18.730973] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:48.845 [2024-04-24 20:21:18.737888] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:48.845 [2024-04-24 20:21:18.737988] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:48.845 [2024-04-24 20:21:18.754901] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:48.845 20:21:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:48.845 20:21:18 -- common/autotest_common.sh@850 -- # return 0 00:14:48.845 20:21:18 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:48.845 20:21:18 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:48.845 20:21:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:48.845 20:21:18 -- common/autotest_common.sh@10 -- # set +x 00:14:48.845 20:21:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:48.845 20:21:18 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:48.845 20:21:18 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:48.845 20:21:18 -- ublk/ublk.sh@125 -- # killprocess 75473 00:14:48.845 20:21:18 -- common/autotest_common.sh@936 -- # '[' -z 75473 ']' 00:14:48.845 20:21:18 -- common/autotest_common.sh@940 -- # kill -0 75473 00:14:48.845 20:21:18 -- common/autotest_common.sh@941 -- # uname 00:14:48.845 20:21:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:48.845 20:21:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75473 00:14:48.845 20:21:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:48.845 killing process with pid 75473 00:14:48.845 20:21:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:48.845 20:21:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75473' 00:14:48.845 20:21:18 -- common/autotest_common.sh@955 -- # kill 75473 00:14:48.845 20:21:18 -- common/autotest_common.sh@960 -- # wait 75473 
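Once the replayed target is up, the test confirms the saved config recreated the device. The checks below mirror the ublk_get_disks/jq calls in the trace that follows (repo-relative rpc.py path assumed):

    ./scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect: /dev/ublkb0
    [ -b /dev/ublkb0 ] && echo "ublkb0 is a block device"        # same test as '[[ -b /dev/ublkb0 ]]'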
00:14:50.220 [2024-04-24 20:21:20.414742] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:50.220 [2024-04-24 20:21:20.446895] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:50.220 [2024-04-24 20:21:20.447098] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:50.220 [2024-04-24 20:21:20.452904] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:50.220 [2024-04-24 20:21:20.452968] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:50.220 [2024-04-24 20:21:20.452979] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:50.220 [2024-04-24 20:21:20.453009] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:14:50.220 [2024-04-24 20:21:20.453215] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:14:52.128 20:21:21 -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:52.128 00:14:52.128 real 0m9.932s 00:14:52.128 user 0m8.331s 00:14:52.128 sys 0m2.345s 00:14:52.128 20:21:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:52.128 20:21:21 -- common/autotest_common.sh@10 -- # set +x 00:14:52.128 ************************************ 00:14:52.128 END TEST test_save_ublk_config 00:14:52.128 ************************************ 00:14:52.128 20:21:21 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:52.128 20:21:21 -- ublk/ublk.sh@139 -- # spdk_pid=75559 00:14:52.128 20:21:21 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:52.128 20:21:21 -- ublk/ublk.sh@141 -- # waitforlisten 75559 00:14:52.128 20:21:21 -- common/autotest_common.sh@817 -- # '[' -z 75559 ']' 00:14:52.128 20:21:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.128 20:21:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:52.128 20:21:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.128 20:21:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:52.128 20:21:21 -- common/autotest_common.sh@10 -- # set +x 00:14:52.128 [2024-04-24 20:21:22.065552] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
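The target started for the create tests uses "-m 0x3", a hexadecimal core mask selecting CPU cores 0 and 1, which is why the startup output just below reports two cores and two reactors. The equivalent launch, sketched:

    ./build/bin/spdk_tgt -m 0x3 -L ublk &   # mask 0b11 -> reactors pinned to cores 0 and 1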
00:14:52.128 [2024-04-24 20:21:22.065719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75559 ] 00:14:52.128 [2024-04-24 20:21:22.264243] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:52.387 [2024-04-24 20:21:22.503681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.387 [2024-04-24 20:21:22.503716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.323 20:21:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:53.323 20:21:23 -- common/autotest_common.sh@850 -- # return 0 00:14:53.323 20:21:23 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:53.323 20:21:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:53.323 20:21:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.323 20:21:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.323 ************************************ 00:14:53.323 START TEST test_create_ublk 00:14:53.323 ************************************ 00:14:53.324 20:21:23 -- common/autotest_common.sh@1111 -- # test_create_ublk 00:14:53.324 20:21:23 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:53.324 20:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.324 20:21:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.324 [2024-04-24 20:21:23.536972] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:53.324 20:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.324 20:21:23 -- ublk/ublk.sh@33 -- # ublk_target= 00:14:53.324 20:21:23 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:53.324 20:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.324 20:21:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.893 20:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.893 20:21:23 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:53.893 20:21:23 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:53.893 20:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.893 20:21:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.893 [2024-04-24 20:21:23.870071] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:53.893 [2024-04-24 20:21:23.870542] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:53.893 [2024-04-24 20:21:23.870559] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:53.893 [2024-04-24 20:21:23.870571] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:53.893 [2024-04-24 20:21:23.879159] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:53.893 [2024-04-24 20:21:23.879200] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:53.893 [2024-04-24 20:21:23.885895] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:53.893 [2024-04-24 20:21:23.900113] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:53.893 [2024-04-24 20:21:23.911020] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:53.893 20:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.893 20:21:23 -- 
ublk/ublk.sh@37 -- # ublk_id=0 00:14:53.893 20:21:23 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:53.893 20:21:23 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:53.893 20:21:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:53.893 20:21:23 -- common/autotest_common.sh@10 -- # set +x 00:14:53.893 20:21:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:53.893 20:21:23 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:53.893 { 00:14:53.893 "ublk_device": "/dev/ublkb0", 00:14:53.893 "id": 0, 00:14:53.893 "queue_depth": 512, 00:14:53.893 "num_queues": 4, 00:14:53.893 "bdev_name": "Malloc0" 00:14:53.893 } 00:14:53.893 ]' 00:14:53.893 20:21:23 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:53.893 20:21:23 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:53.894 20:21:23 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:53.894 20:21:24 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:53.894 20:21:24 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:53.894 20:21:24 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:53.894 20:21:24 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:53.894 20:21:24 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:53.894 20:21:24 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:54.154 20:21:24 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:54.154 20:21:24 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:54.154 20:21:24 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:54.154 20:21:24 -- lvol/common.sh@41 -- # local offset=0 00:14:54.154 20:21:24 -- lvol/common.sh@42 -- # local size=134217728 00:14:54.154 20:21:24 -- lvol/common.sh@43 -- # local rw=write 00:14:54.154 20:21:24 -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:54.154 20:21:24 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:54.154 20:21:24 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:54.154 20:21:24 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:54.154 20:21:24 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:54.154 20:21:24 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:54.154 20:21:24 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:54.154 fio: verification read phase will never start because write phase uses all of runtime 00:14:54.154 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:54.154 fio-3.35 00:14:54.154 Starting 1 process 00:15:06.378 00:15:06.378 fio_test: (groupid=0, jobs=1): err= 0: pid=75615: Wed Apr 24 20:21:34 2024 00:15:06.378 write: IOPS=16.1k, BW=63.1MiB/s (66.2MB/s)(631MiB/10001msec); 0 zone resets 00:15:06.378 clat (usec): min=37, max=3878, avg=61.10, stdev=95.80 00:15:06.378 lat (usec): min=38, max=3878, avg=61.54, stdev=95.81 00:15:06.378 clat percentiles (usec): 00:15:06.378 | 1.00th=[ 41], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 55], 00:15:06.378 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58], 00:15:06.378 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 63], 95.00th=[ 67], 
00:15:06.378 | 99.00th=[ 78], 99.50th=[ 84], 99.90th=[ 1942], 99.95th=[ 2737], 00:15:06.378 | 99.99th=[ 3490] 00:15:06.378 bw ( KiB/s): min=61704, max=71504, per=100.00%, avg=64673.26, stdev=1936.66, samples=19 00:15:06.378 iops : min=15426, max=17876, avg=16168.32, stdev=484.16, samples=19 00:15:06.378 lat (usec) : 50=3.37%, 100=96.38%, 250=0.05%, 500=0.01%, 750=0.02% 00:15:06.378 lat (usec) : 1000=0.01% 00:15:06.378 lat (msec) : 2=0.07%, 4=0.10% 00:15:06.378 cpu : usr=3.09%, sys=10.29%, ctx=161525, majf=0, minf=794 00:15:06.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:06.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:06.378 issued rwts: total=0,161524,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:06.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:06.378 00:15:06.378 Run status group 0 (all jobs): 00:15:06.378 WRITE: bw=63.1MiB/s (66.2MB/s), 63.1MiB/s-63.1MiB/s (66.2MB/s-66.2MB/s), io=631MiB (662MB), run=10001-10001msec 00:15:06.378 00:15:06.378 Disk stats (read/write): 00:15:06.378 ublkb0: ios=0/159844, merge=0/0, ticks=0/8642, in_queue=8642, util=99.15% 00:15:06.378 20:21:34 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:06.378 20:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.378 20:21:34 -- common/autotest_common.sh@10 -- # set +x 00:15:06.378 [2024-04-24 20:21:34.406180] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:06.378 [2024-04-24 20:21:34.445357] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:06.378 [2024-04-24 20:21:34.449004] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:06.378 [2024-04-24 20:21:34.455876] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:06.378 [2024-04-24 20:21:34.456177] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:06.378 [2024-04-24 20:21:34.456192] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:06.378 20:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.378 20:21:34 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:15:06.378 20:21:34 -- common/autotest_common.sh@638 -- # local es=0 00:15:06.378 20:21:34 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:06.378 20:21:34 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:15:06.378 20:21:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:06.378 20:21:34 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:15:06.378 20:21:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:06.378 20:21:34 -- common/autotest_common.sh@641 -- # rpc_cmd ublk_stop_disk 0 00:15:06.378 20:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.378 20:21:34 -- common/autotest_common.sh@10 -- # set +x 00:15:06.378 [2024-04-24 20:21:34.469029] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:06.378 request: 00:15:06.378 { 00:15:06.378 "ublk_id": 0, 00:15:06.378 "method": "ublk_stop_disk", 00:15:06.378 "req_id": 1 00:15:06.378 } 00:15:06.378 Got JSON-RPC error response 00:15:06.378 response: 00:15:06.378 { 00:15:06.378 "code": -19, 00:15:06.378 "message": "No such device" 00:15:06.378 } 00:15:06.378 20:21:34 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:15:06.378 20:21:34 -- 
common/autotest_common.sh@641 -- # es=1 00:15:06.378 20:21:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:06.378 20:21:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:06.378 20:21:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:06.378 20:21:34 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:06.378 20:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.378 20:21:34 -- common/autotest_common.sh@10 -- # set +x 00:15:06.378 [2024-04-24 20:21:34.487963] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:06.378 [2024-04-24 20:21:34.495883] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:06.378 [2024-04-24 20:21:34.495929] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:06.378 20:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.378 20:21:34 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:06.378 20:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.378 20:21:34 -- common/autotest_common.sh@10 -- # set +x 00:15:06.378 20:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.378 20:21:34 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:06.378 20:21:34 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:06.378 20:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.378 20:21:34 -- common/autotest_common.sh@10 -- # set +x 00:15:06.378 20:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.378 20:21:34 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:06.378 20:21:34 -- lvol/common.sh@26 -- # jq length 00:15:06.378 20:21:34 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:06.378 20:21:34 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:06.378 20:21:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.378 20:21:34 -- common/autotest_common.sh@10 -- # set +x 00:15:06.378 20:21:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.378 20:21:34 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:06.378 20:21:34 -- lvol/common.sh@28 -- # jq length 00:15:06.378 20:21:34 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:06.378 00:15:06.378 real 0m11.463s 00:15:06.378 user 0m0.692s 00:15:06.378 sys 0m1.162s 00:15:06.378 20:21:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:06.378 20:21:34 -- common/autotest_common.sh@10 -- # set +x 00:15:06.378 ************************************ 00:15:06.378 END TEST test_create_ublk 00:15:06.378 ************************************ 00:15:06.378 20:21:35 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:06.378 20:21:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:06.378 20:21:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.379 20:21:35 -- common/autotest_common.sh@10 -- # set +x 00:15:06.379 ************************************ 00:15:06.379 START TEST test_create_multi_ublk 00:15:06.379 ************************************ 00:15:06.379 20:21:35 -- common/autotest_common.sh@1111 -- # test_create_multi_ublk 00:15:06.379 20:21:35 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:06.379 20:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.379 20:21:35 -- common/autotest_common.sh@10 -- # set +x 00:15:06.379 [2024-04-24 20:21:35.158767] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:06.379 20:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.379 20:21:35 -- ublk/ublk.sh@62 -- # ublk_target= 
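test_create_multi_ublk, which starts here, walks device ids 0 through MAX_DEV_ID (3), pairing one malloc bdev with one ublk disk per id; the trace that follows is exactly this loop unrolled. A condensed sketch of the RPC sequence, using the same sizes and flags the script passes:

    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096    # 128 MiB bdev, 4 KiB blocks
        ./scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512    # 4 queues, queue depth 512
    done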
00:15:06.379 20:21:35 -- ublk/ublk.sh@64 -- # seq 0 3 00:15:06.379 20:21:35 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:06.379 20:21:35 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:06.379 20:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.379 20:21:35 -- common/autotest_common.sh@10 -- # set +x 00:15:06.379 20:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.379 20:21:35 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:06.379 20:21:35 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:06.379 20:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.379 20:21:35 -- common/autotest_common.sh@10 -- # set +x 00:15:06.379 [2024-04-24 20:21:35.496039] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:06.379 [2024-04-24 20:21:35.496479] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:06.379 [2024-04-24 20:21:35.496496] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:06.379 [2024-04-24 20:21:35.496507] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:06.379 [2024-04-24 20:21:35.502884] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:06.379 [2024-04-24 20:21:35.502912] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:06.379 [2024-04-24 20:21:35.510894] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:06.379 [2024-04-24 20:21:35.511484] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:06.379 [2024-04-24 20:21:35.520246] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:06.379 20:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.379 20:21:35 -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:06.379 20:21:35 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:06.379 20:21:35 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:06.379 20:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.379 20:21:35 -- common/autotest_common.sh@10 -- # set +x 00:15:06.379 20:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.379 20:21:35 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:06.379 20:21:35 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:06.379 20:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.379 20:21:35 -- common/autotest_common.sh@10 -- # set +x 00:15:06.379 [2024-04-24 20:21:35.867031] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:06.379 [2024-04-24 20:21:35.867499] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:06.379 [2024-04-24 20:21:35.867520] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:06.379 [2024-04-24 20:21:35.867529] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:06.379 [2024-04-24 20:21:35.874918] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:06.379 [2024-04-24 20:21:35.874942] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:06.379 [2024-04-24 20:21:35.882897] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:06.379 [2024-04-24 
20:21:35.883479] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:06.379 [2024-04-24 20:21:35.891927] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:06.379 20:21:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.379 20:21:35 -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:06.379 20:21:35 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:06.379 20:21:35 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:06.379 20:21:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.379 20:21:35 -- common/autotest_common.sh@10 -- # set +x 00:15:06.379 20:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.379 20:21:36 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:06.379 20:21:36 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:06.379 20:21:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.379 20:21:36 -- common/autotest_common.sh@10 -- # set +x 00:15:06.379 [2024-04-24 20:21:36.231085] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:06.379 [2024-04-24 20:21:36.231518] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:06.379 [2024-04-24 20:21:36.231534] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:06.379 [2024-04-24 20:21:36.231546] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:06.379 [2024-04-24 20:21:36.238909] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:06.379 [2024-04-24 20:21:36.238942] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:06.379 [2024-04-24 20:21:36.245878] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:06.379 [2024-04-24 20:21:36.246476] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:06.379 [2024-04-24 20:21:36.269911] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:06.379 20:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.379 20:21:36 -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:06.379 20:21:36 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:06.379 20:21:36 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:06.379 20:21:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.379 20:21:36 -- common/autotest_common.sh@10 -- # set +x 00:15:06.638 20:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.638 20:21:36 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:06.638 20:21:36 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:06.638 20:21:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.638 20:21:36 -- common/autotest_common.sh@10 -- # set +x 00:15:06.638 [2024-04-24 20:21:36.625070] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:06.638 [2024-04-24 20:21:36.625528] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:06.638 [2024-04-24 20:21:36.625548] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:06.638 [2024-04-24 20:21:36.625557] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:06.638 [2024-04-24 20:21:36.632934] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd 
UBLK_CMD_ADD_DEV completed 00:15:06.638 [2024-04-24 20:21:36.632957] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:06.638 [2024-04-24 20:21:36.640921] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:06.638 [2024-04-24 20:21:36.641555] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:06.638 [2024-04-24 20:21:36.649952] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:06.638 20:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.638 20:21:36 -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:06.638 20:21:36 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:06.638 20:21:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:06.638 20:21:36 -- common/autotest_common.sh@10 -- # set +x 00:15:06.638 20:21:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:06.638 20:21:36 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:06.638 { 00:15:06.638 "ublk_device": "/dev/ublkb0", 00:15:06.638 "id": 0, 00:15:06.638 "queue_depth": 512, 00:15:06.638 "num_queues": 4, 00:15:06.638 "bdev_name": "Malloc0" 00:15:06.638 }, 00:15:06.638 { 00:15:06.638 "ublk_device": "/dev/ublkb1", 00:15:06.638 "id": 1, 00:15:06.638 "queue_depth": 512, 00:15:06.638 "num_queues": 4, 00:15:06.638 "bdev_name": "Malloc1" 00:15:06.638 }, 00:15:06.638 { 00:15:06.638 "ublk_device": "/dev/ublkb2", 00:15:06.638 "id": 2, 00:15:06.638 "queue_depth": 512, 00:15:06.638 "num_queues": 4, 00:15:06.638 "bdev_name": "Malloc2" 00:15:06.638 }, 00:15:06.638 { 00:15:06.638 "ublk_device": "/dev/ublkb3", 00:15:06.638 "id": 3, 00:15:06.638 "queue_depth": 512, 00:15:06.638 "num_queues": 4, 00:15:06.638 "bdev_name": "Malloc3" 00:15:06.638 } 00:15:06.638 ]' 00:15:06.638 20:21:36 -- ublk/ublk.sh@72 -- # seq 0 3 00:15:06.638 20:21:36 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:06.638 20:21:36 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:06.638 20:21:36 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:06.638 20:21:36 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:06.638 20:21:36 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:06.638 20:21:36 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:06.638 20:21:36 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:06.638 20:21:36 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:06.638 20:21:36 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:06.638 20:21:36 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:06.898 20:21:36 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:06.898 20:21:36 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:06.898 20:21:36 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:06.898 20:21:36 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:06.898 20:21:36 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:06.898 20:21:36 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:06.898 20:21:36 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:06.898 20:21:37 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:06.898 20:21:37 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:06.898 20:21:37 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:06.898 20:21:37 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:06.898 20:21:37 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:06.898 20:21:37 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:06.898 20:21:37 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:06.898 20:21:37 
-- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:06.898 20:21:37 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:07.161 20:21:37 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:07.161 20:21:37 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:07.161 20:21:37 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:07.161 20:21:37 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:07.161 20:21:37 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:07.161 20:21:37 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:07.161 20:21:37 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:07.161 20:21:37 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:07.161 20:21:37 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:07.161 20:21:37 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:07.161 20:21:37 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:07.161 20:21:37 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:07.161 20:21:37 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:07.425 20:21:37 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:07.425 20:21:37 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:07.425 20:21:37 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:07.425 20:21:37 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:07.425 20:21:37 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:07.425 20:21:37 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:07.426 20:21:37 -- ublk/ublk.sh@85 -- # seq 0 3 00:15:07.426 20:21:37 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:07.426 20:21:37 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:07.426 20:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.426 20:21:37 -- common/autotest_common.sh@10 -- # set +x 00:15:07.426 [2024-04-24 20:21:37.512991] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:07.426 [2024-04-24 20:21:37.551386] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:07.426 [2024-04-24 20:21:37.552574] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:07.426 [2024-04-24 20:21:37.558933] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:07.426 [2024-04-24 20:21:37.559245] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:07.426 [2024-04-24 20:21:37.559260] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:07.426 20:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:07.426 20:21:37 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:07.426 20:21:37 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:07.426 20:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.426 20:21:37 -- common/autotest_common.sh@10 -- # set +x 00:15:07.426 [2024-04-24 20:21:37.573116] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:07.426 [2024-04-24 20:21:37.617940] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:07.426 [2024-04-24 20:21:37.619147] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:07.426 [2024-04-24 20:21:37.626971] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:07.426 [2024-04-24 20:21:37.627285] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:07.426 [2024-04-24 20:21:37.627306] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 
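Each device teardown in this phase follows the same control-command chain visible in the debug lines: UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV, then removal from the target's device list. Driven by hand it is two RPCs, sketched here with the same 120 s timeout the script passes for the final destroy:

    ./scripts/rpc.py ublk_stop_disk 2              # per device: stop, delete, unregister
    ./scripts/rpc.py -t 120 ublk_destroy_target    # then tear down the ublk target thread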
00:15:07.426 20:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:07.426 20:21:37 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:07.426 20:21:37 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:07.426 20:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.426 20:21:37 -- common/autotest_common.sh@10 -- # set +x 00:15:07.426 [2024-04-24 20:21:37.642982] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:07.693 [2024-04-24 20:21:37.689935] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:07.693 [2024-04-24 20:21:37.691050] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:07.693 [2024-04-24 20:21:37.697895] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:07.693 [2024-04-24 20:21:37.698196] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:07.693 [2024-04-24 20:21:37.698216] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:07.693 20:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:07.693 20:21:37 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:07.693 20:21:37 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:07.693 20:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.693 20:21:37 -- common/autotest_common.sh@10 -- # set +x 00:15:07.693 [2024-04-24 20:21:37.713016] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:07.693 [2024-04-24 20:21:37.749334] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:07.693 [2024-04-24 20:21:37.750533] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:07.693 [2024-04-24 20:21:37.756900] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:07.693 [2024-04-24 20:21:37.757185] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:07.693 [2024-04-24 20:21:37.757206] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:07.693 20:21:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:07.693 20:21:37 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:07.962 [2024-04-24 20:21:37.946022] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:07.962 [2024-04-24 20:21:37.952776] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:07.962 [2024-04-24 20:21:37.952831] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:07.962 20:21:37 -- ublk/ublk.sh@93 -- # seq 0 3 00:15:07.962 20:21:37 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:07.962 20:21:37 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:07.962 20:21:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:07.962 20:21:37 -- common/autotest_common.sh@10 -- # set +x 00:15:08.233 20:21:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:08.233 20:21:38 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.233 20:21:38 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:08.233 20:21:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:08.233 20:21:38 -- common/autotest_common.sh@10 -- # set +x 00:15:08.506 20:21:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:08.506 20:21:38 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:08.506 20:21:38 -- ublk/ublk.sh@94 -- # rpc_cmd 
bdev_malloc_delete Malloc2 00:15:08.506 20:21:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:08.506 20:21:38 -- common/autotest_common.sh@10 -- # set +x 00:15:09.083 20:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.083 20:21:39 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:09.083 20:21:39 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:09.083 20:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.083 20:21:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.342 20:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.342 20:21:39 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:09.342 20:21:39 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:09.342 20:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.342 20:21:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.342 20:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.342 20:21:39 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:09.342 20:21:39 -- lvol/common.sh@26 -- # jq length 00:15:09.342 20:21:39 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:09.342 20:21:39 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:09.342 20:21:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:09.342 20:21:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.342 20:21:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:09.342 20:21:39 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:09.342 20:21:39 -- lvol/common.sh@28 -- # jq length 00:15:09.342 20:21:39 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:09.342 00:15:09.342 real 0m4.401s 00:15:09.342 user 0m0.971s 00:15:09.342 sys 0m0.225s 00:15:09.342 20:21:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:09.342 20:21:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.342 ************************************ 00:15:09.342 END TEST test_create_multi_ublk 00:15:09.342 ************************************ 00:15:09.653 20:21:39 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:09.653 20:21:39 -- ublk/ublk.sh@147 -- # cleanup 00:15:09.653 20:21:39 -- ublk/ublk.sh@130 -- # killprocess 75559 00:15:09.653 20:21:39 -- common/autotest_common.sh@936 -- # '[' -z 75559 ']' 00:15:09.653 20:21:39 -- common/autotest_common.sh@940 -- # kill -0 75559 00:15:09.653 20:21:39 -- common/autotest_common.sh@941 -- # uname 00:15:09.653 20:21:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:09.653 20:21:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75559 00:15:09.653 20:21:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:09.653 20:21:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:09.653 killing process with pid 75559 00:15:09.653 20:21:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75559' 00:15:09.653 20:21:39 -- common/autotest_common.sh@955 -- # kill 75559 00:15:09.653 20:21:39 -- common/autotest_common.sh@960 -- # wait 75559 00:15:10.605 [2024-04-24 20:21:40.785452] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:10.605 [2024-04-24 20:21:40.785519] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:11.983 ************************************ 00:15:11.983 END TEST ublk 00:15:11.983 ************************************ 00:15:11.983 00:15:11.983 real 0m30.334s 00:15:11.983 user 0m44.827s 00:15:11.983 sys 0m8.797s 00:15:11.983 20:21:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:11.983 20:21:42 -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.983 20:21:42 -- spdk/autotest.sh@250 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:11.983 20:21:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:11.983 20:21:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:11.983 20:21:42 -- common/autotest_common.sh@10 -- # set +x 00:15:11.983 ************************************ 00:15:11.983 START TEST ublk_recovery 00:15:11.983 ************************************ 00:15:11.983 20:21:42 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:12.241 * Looking for test storage... 00:15:12.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:12.241 20:21:42 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:12.241 20:21:42 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:12.241 20:21:42 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:12.241 20:21:42 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:12.241 20:21:42 -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:12.241 20:21:42 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:12.241 20:21:42 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:12.241 20:21:42 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:12.241 20:21:42 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:12.241 20:21:42 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:12.241 20:21:42 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75975 00:15:12.241 20:21:42 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:12.241 20:21:42 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:12.241 20:21:42 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75975 00:15:12.241 20:21:42 -- common/autotest_common.sh@817 -- # '[' -z 75975 ']' 00:15:12.241 20:21:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.241 20:21:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:12.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.241 20:21:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.241 20:21:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:12.241 20:21:42 -- common/autotest_common.sh@10 -- # set +x 00:15:12.241 [2024-04-24 20:21:42.453170] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
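A minimal sketch (not captured output) of the teardown that test_create_multi_ublk just ran, reconstructed from the xtrace above: each of the four ublk disks is stopped, the ublk target is destroyed, then the backing malloc bdevs are deleted. Paths and the default /var/tmp/spdk.sock RPC socket are as shown in the log.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in $(seq 0 3); do
        $rpc ublk_stop_disk "$i"            # per the DEBUG lines: UBLK_CMD_STOP_DEV, then UBLK_CMD_DEL_DEV
    done
    $rpc -t 120 ublk_destroy_target         # extended RPC timeout, matching the invocation in the log
    for i in $(seq 0 3); do
        $rpc bdev_malloc_delete "Malloc$i"  # release Malloc0..Malloc3
    done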
00:15:12.241 [2024-04-24 20:21:42.453286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75975 ] 00:15:12.500 [2024-04-24 20:21:42.626986] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:12.807 [2024-04-24 20:21:42.871095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.807 [2024-04-24 20:21:42.871128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.747 20:21:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:13.747 20:21:43 -- common/autotest_common.sh@850 -- # return 0 00:15:13.747 20:21:43 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:13.747 20:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:13.747 20:21:43 -- common/autotest_common.sh@10 -- # set +x 00:15:13.747 [2024-04-24 20:21:43.830926] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:13.747 20:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:13.747 20:21:43 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:13.747 20:21:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:13.747 20:21:43 -- common/autotest_common.sh@10 -- # set +x 00:15:14.005 malloc0 00:15:14.005 20:21:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.005 20:21:44 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:14.005 20:21:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.005 20:21:44 -- common/autotest_common.sh@10 -- # set +x 00:15:14.005 [2024-04-24 20:21:44.012016] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:15:14.005 [2024-04-24 20:21:44.012136] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:14.005 [2024-04-24 20:21:44.012147] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:14.005 [2024-04-24 20:21:44.012158] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:14.005 [2024-04-24 20:21:44.020979] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:14.005 [2024-04-24 20:21:44.021026] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:14.005 [2024-04-24 20:21:44.027890] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:14.005 [2024-04-24 20:21:44.028067] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:14.005 [2024-04-24 20:21:44.042897] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:14.005 1 00:15:14.005 20:21:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.005 20:21:44 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:14.940 20:21:45 -- ublk/ublk_recovery.sh@31 -- # fio_proc=76021 00:15:14.940 20:21:45 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:14.940 20:21:45 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:14.940 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:14.940 fio-3.35 00:15:14.940 Starting 1 process 00:15:20.205 20:21:50 -- 
ublk/ublk_recovery.sh@36 -- # kill -9 75975 00:15:20.205 20:21:50 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:25.621 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75975 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:25.621 20:21:55 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76128 00:15:25.621 20:21:55 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:25.621 20:21:55 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:25.621 20:21:55 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76128 00:15:25.621 20:21:55 -- common/autotest_common.sh@817 -- # '[' -z 76128 ']' 00:15:25.621 20:21:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.621 20:21:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:25.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.621 20:21:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.621 20:21:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:25.621 20:21:55 -- common/autotest_common.sh@10 -- # set +x 00:15:25.621 [2024-04-24 20:21:55.170225] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:15:25.621 [2024-04-24 20:21:55.170346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76128 ] 00:15:25.621 [2024-04-24 20:21:55.337536] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:25.621 [2024-04-24 20:21:55.578454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.621 [2024-04-24 20:21:55.578488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.554 20:21:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:26.554 20:21:56 -- common/autotest_common.sh@850 -- # return 0 00:15:26.554 20:21:56 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:26.554 20:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.554 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:15:26.554 [2024-04-24 20:21:56.552137] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:26.554 20:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.554 20:21:56 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:26.554 20:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.554 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:15:26.554 malloc0 00:15:26.554 20:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.554 20:21:56 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:26.554 20:21:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.554 20:21:56 -- common/autotest_common.sh@10 -- # set +x 00:15:26.554 [2024-04-24 20:21:56.741043] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:26.554 [2024-04-24 20:21:56.741095] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:26.554 [2024-04-24 20:21:56.741105] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:26.554 [2024-04-24 20:21:56.748912] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_GET_DEV_INFO completed 00:15:26.554 [2024-04-24 20:21:56.748935] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:26.554 [2024-04-24 20:21:56.749029] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:26.554 1 00:15:26.555 20:21:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.555 20:21:56 -- ublk/ublk_recovery.sh@52 -- # wait 76021 00:15:26.555 [2024-04-24 20:21:56.756876] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:26.555 [2024-04-24 20:21:56.760692] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:26.555 [2024-04-24 20:21:56.764097] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:26.555 [2024-04-24 20:21:56.764124] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:22.786 00:16:22.787 fio_test: (groupid=0, jobs=1): err= 0: pid=76024: Wed Apr 24 20:22:45 2024 00:16:22.787 read: IOPS=21.8k, BW=85.0MiB/s (89.2MB/s)(5103MiB/60002msec) 00:16:22.787 slat (nsec): min=1901, max=862582, avg=7449.74, stdev=2836.16 00:16:22.787 clat (usec): min=1038, max=6713.1k, avg=2861.40, stdev=44706.44 00:16:22.787 lat (usec): min=1045, max=6713.1k, avg=2868.85, stdev=44706.44 00:16:22.787 clat percentiles (usec): 00:16:22.787 | 1.00th=[ 1991], 5.00th=[ 2180], 10.00th=[ 2245], 20.00th=[ 2311], 00:16:22.787 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2474], 00:16:22.787 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2966], 95.00th=[ 3785], 00:16:22.787 | 99.00th=[ 5014], 99.50th=[ 5604], 99.90th=[ 6980], 99.95th=[ 7373], 00:16:22.787 | 99.99th=[10290] 00:16:22.787 bw ( KiB/s): min= 1672, max=104862, per=100.00%, avg=96848.93, stdev=12449.81, samples=107 00:16:22.787 iops : min= 418, max=26215, avg=24212.24, stdev=3112.45, samples=107 00:16:22.787 write: IOPS=21.7k, BW=84.9MiB/s (89.1MB/s)(5097MiB/60002msec); 0 zone resets 00:16:22.787 slat (nsec): min=1969, max=2161.2k, avg=7462.34, stdev=3440.07 00:16:22.787 clat (usec): min=933, max=6713.3k, avg=3005.21, stdev=49142.05 00:16:22.787 lat (usec): min=938, max=6713.3k, avg=3012.67, stdev=49142.06 00:16:22.787 clat percentiles (usec): 00:16:22.787 | 1.00th=[ 2008], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2409], 00:16:22.787 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2573], 00:16:22.787 | 70.00th=[ 2638], 80.00th=[ 2704], 90.00th=[ 2966], 95.00th=[ 3785], 00:16:22.787 | 99.00th=[ 5014], 99.50th=[ 5669], 99.90th=[ 7111], 99.95th=[ 7439], 00:16:22.787 | 99.99th=[10683] 00:16:22.787 bw ( KiB/s): min= 1984, max=104472, per=100.00%, avg=96735.97, stdev=12340.75, samples=107 00:16:22.787 iops : min= 496, max=26118, avg=24184.01, stdev=3085.19, samples=107 00:16:22.787 lat (usec) : 1000=0.01% 00:16:22.787 lat (msec) : 2=1.04%, 4=94.99%, 10=3.95%, 20=0.01%, >=2000=0.01% 00:16:22.787 cpu : usr=12.47%, sys=31.51%, ctx=110897, majf=0, minf=13 00:16:22.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:22.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:22.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:22.787 issued rwts: total=1306277,1304750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:22.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:22.787 00:16:22.787 Run status group 0 (all jobs): 00:16:22.787 
READ: bw=85.0MiB/s (89.2MB/s), 85.0MiB/s-85.0MiB/s (89.2MB/s-89.2MB/s), io=5103MiB (5351MB), run=60002-60002msec 00:16:22.787 WRITE: bw=84.9MiB/s (89.1MB/s), 84.9MiB/s-84.9MiB/s (89.1MB/s-89.1MB/s), io=5097MiB (5344MB), run=60002-60002msec 00:16:22.787 00:16:22.787 Disk stats (read/write): 00:16:22.787 ublkb1: ios=1303474/1301886, merge=0/0, ticks=3619228/3668075, in_queue=7287303, util=99.95% 00:16:22.787 20:22:45 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:22.787 20:22:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.787 20:22:45 -- common/autotest_common.sh@10 -- # set +x 00:16:22.787 [2024-04-24 20:22:45.331088] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:22.787 [2024-04-24 20:22:45.375021] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:22.787 [2024-04-24 20:22:45.375278] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:22.787 [2024-04-24 20:22:45.380937] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:22.787 [2024-04-24 20:22:45.381097] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:22.787 [2024-04-24 20:22:45.381115] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:22.787 20:22:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.787 20:22:45 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:22.787 20:22:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.787 20:22:45 -- common/autotest_common.sh@10 -- # set +x 00:16:22.787 [2024-04-24 20:22:45.393986] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:22.787 [2024-04-24 20:22:45.404880] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:22.787 [2024-04-24 20:22:45.404925] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:22.787 20:22:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.787 20:22:45 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:22.787 20:22:45 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:22.787 20:22:45 -- ublk/ublk_recovery.sh@14 -- # killprocess 76128 00:16:22.787 20:22:45 -- common/autotest_common.sh@936 -- # '[' -z 76128 ']' 00:16:22.787 20:22:45 -- common/autotest_common.sh@940 -- # kill -0 76128 00:16:22.787 20:22:45 -- common/autotest_common.sh@941 -- # uname 00:16:22.787 20:22:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.787 20:22:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76128 00:16:22.787 20:22:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:22.787 killing process with pid 76128 00:16:22.787 20:22:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:22.787 20:22:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76128' 00:16:22.787 20:22:45 -- common/autotest_common.sh@955 -- # kill 76128 00:16:22.787 20:22:45 -- common/autotest_common.sh@960 -- # wait 76128 00:16:22.787 [2024-04-24 20:22:46.613589] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:16:22.787 [2024-04-24 20:22:46.613660] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:16:22.787 00:16:22.787 real 1m5.917s 00:16:22.787 user 1m49.741s 00:16:22.787 sys 0m37.174s 00:16:22.787 20:22:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:22.787 ************************************ 00:16:22.787 END TEST ublk_recovery 00:16:22.787 ************************************ 00:16:22.787 20:22:48 -- 
common/autotest_common.sh@10 -- # set +x 00:16:22.787 20:22:48 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:16:22.787 20:22:48 -- spdk/autotest.sh@258 -- # timing_exit lib 00:16:22.787 20:22:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:22.787 20:22:48 -- common/autotest_common.sh@10 -- # set +x 00:16:22.787 20:22:48 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:16:22.787 20:22:48 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:16:22.787 20:22:48 -- spdk/autotest.sh@277 -- # '[' 0 -eq 1 ']' 00:16:22.787 20:22:48 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:16:22.787 20:22:48 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:16:22.787 20:22:48 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:16:22.787 20:22:48 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:16:22.787 20:22:48 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:16:22.787 20:22:48 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:16:22.787 20:22:48 -- spdk/autotest.sh@337 -- # '[' 1 -eq 1 ']' 00:16:22.787 20:22:48 -- spdk/autotest.sh@338 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:22.787 20:22:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:22.787 20:22:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:22.787 20:22:48 -- common/autotest_common.sh@10 -- # set +x 00:16:22.787 ************************************ 00:16:22.787 START TEST ftl 00:16:22.787 ************************************ 00:16:22.787 20:22:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:22.787 * Looking for test storage... 00:16:22.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.787 20:22:48 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:22.787 20:22:48 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:22.787 20:22:48 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.787 20:22:48 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:22.787 20:22:48 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
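The recovery flow that TEST ublk_recovery just verified reduces, per its xtrace, to: hard-kill the target while fio is writing to /dev/ublkb1, start a fresh target process, recreate the backing bdev, and re-adopt the live device. A minimal sketch (not captured output; variable names are the test's own, waitforlisten and error handling trimmed):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    kill -9 "$spdk_pid"                                    # simulate a crash mid-I/O
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk & spdk_pid=$!  # new target; wait for the RPC socket before issuing RPCs
    $rpc ublk_create_target
    $rpc bdev_malloc_create -b malloc0 64 4096             # same bdev name and geometry as before the kill
    $rpc ublk_recover_disk malloc0 1                       # drives UBLK_CMD_START_USER_RECOVERY / _END_USER_RECOVERY
    wait "$fio_proc"                                       # fio survives the crash and runs its full 60s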
00:16:22.787 20:22:48 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:22.787 20:22:48 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.787 20:22:48 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:22.787 20:22:48 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:22.787 20:22:48 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.787 20:22:48 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.787 20:22:48 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:22.787 20:22:48 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:22.787 20:22:48 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:22.787 20:22:48 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:22.787 20:22:48 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:22.787 20:22:48 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:22.787 20:22:48 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.787 20:22:48 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:22.787 20:22:48 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:22.787 20:22:48 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:22.787 20:22:48 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:22.787 20:22:48 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:22.787 20:22:48 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:22.787 20:22:48 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:22.787 20:22:48 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:22.787 20:22:48 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:22.787 20:22:48 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.787 20:22:48 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:22.787 20:22:48 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:22.787 20:22:48 -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:22.787 20:22:48 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:22.787 20:22:48 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:22.787 20:22:48 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:22.787 20:22:48 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:22.787 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:22.787 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:22.787 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:22.787 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:22.787 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:22.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
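The "Waiting for process..." line above is the start of ftl.sh's deferred-init bring-up, which the following lines trace: spdk_tgt is launched with --wait-for-rpc so bdev options can be set before subsystem init, initialization is then released, and the local NVMe controllers are attached from generated config. A minimal sketch (not captured output), assuming the /dev/fd/62 seen below comes from bash process substitution:

    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/build/bin/spdk_tgt" --wait-for-rpc &                        # start paused, pre-init
    "$spdk/scripts/rpc.py" bdev_set_options -d                         # same flag as ftl.sh line 40 below
    "$spdk/scripts/rpc.py" framework_start_init                        # complete SPDK subsystem initialization
    "$spdk/scripts/rpc.py" load_subsystem_config -j <("$spdk/scripts/gen_nvme.sh")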
00:16:22.787 20:22:49 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76934 00:16:22.787 20:22:49 -- ftl/ftl.sh@38 -- # waitforlisten 76934 00:16:22.787 20:22:49 -- common/autotest_common.sh@817 -- # '[' -z 76934 ']' 00:16:22.787 20:22:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.787 20:22:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:22.787 20:22:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.787 20:22:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:22.787 20:22:49 -- common/autotest_common.sh@10 -- # set +x 00:16:22.787 20:22:49 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:22.787 [2024-04-24 20:22:49.430148] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:16:22.787 [2024-04-24 20:22:49.430268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76934 ] 00:16:22.787 [2024-04-24 20:22:49.602279] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.787 [2024-04-24 20:22:49.849371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.787 20:22:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:22.787 20:22:50 -- common/autotest_common.sh@850 -- # return 0 00:16:22.787 20:22:50 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:22.787 20:22:50 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:22.787 20:22:51 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:22.787 20:22:51 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:22.787 20:22:51 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:22.787 20:22:51 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:22.787 20:22:51 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:22.787 20:22:52 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:22.787 20:22:52 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:22.787 20:22:52 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:22.787 20:22:52 -- ftl/ftl.sh@50 -- # break 00:16:22.787 20:22:52 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:22.787 20:22:52 -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:22.787 20:22:52 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:22.787 20:22:52 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:22.787 20:22:52 -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:22.787 20:22:52 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:22.787 20:22:52 -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:22.787 20:22:52 -- ftl/ftl.sh@63 -- # break 00:16:22.787 20:22:52 -- ftl/ftl.sh@66 -- # killprocess 76934 00:16:22.787 20:22:52 -- common/autotest_common.sh@936 -- # '[' -z 76934 ']' 00:16:22.787 20:22:52 -- common/autotest_common.sh@940 -- # kill -0 76934 00:16:22.787 20:22:52 -- common/autotest_common.sh@941 -- # uname 00:16:22.787 20:22:52 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:22.787 20:22:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76934 00:16:22.787 20:22:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:22.787 killing process with pid 76934 00:16:22.787 20:22:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:22.787 20:22:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76934' 00:16:22.787 20:22:52 -- common/autotest_common.sh@955 -- # kill 76934 00:16:22.787 20:22:52 -- common/autotest_common.sh@960 -- # wait 76934 00:16:24.690 20:22:54 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:24.690 20:22:54 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:16:24.690 20:22:54 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:24.690 20:22:54 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:24.690 20:22:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:24.690 20:22:54 -- common/autotest_common.sh@10 -- # set +x 00:16:24.951 ************************************ 00:16:24.951 START TEST ftl_fio_basic 00:16:24.951 ************************************ 00:16:24.951 20:22:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:24.951 * Looking for test storage... 00:16:24.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.951 20:22:55 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:24.951 20:22:55 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:24.951 20:22:55 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.951 20:22:55 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.951 20:22:55 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
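Note how ftl.sh picked the two controllers above: a namespace qualifies as the NV cache only if it exposes 64-byte metadata (md_size==64) and at least 1310720 blocks, while any other non-zoned, sufficiently large namespace can serve as the base device — hence 0000:00:10.0 as cache and 0000:00:11.0 as base. The two filters, copied from the xtrace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'   # NV-cache candidates
    $rpc bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'   # base-device candidates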
00:16:24.951 20:22:55 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:24.951 20:22:55 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.951 20:22:55 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:24.951 20:22:55 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:24.951 20:22:55 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.951 20:22:55 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.951 20:22:55 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:24.951 20:22:55 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:24.951 20:22:55 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:24.951 20:22:55 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:24.951 20:22:55 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:24.951 20:22:55 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:24.951 20:22:55 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.951 20:22:55 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.951 20:22:55 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:24.951 20:22:55 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:24.951 20:22:55 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:24.951 20:22:55 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:24.951 20:22:55 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:24.951 20:22:55 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:24.951 20:22:55 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:24.951 20:22:55 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:24.951 20:22:55 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.951 20:22:55 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.951 20:22:55 -- ftl/fio.sh@11 -- # declare -A suite 00:16:24.951 20:22:55 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:24.951 20:22:55 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:24.951 20:22:55 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:24.951 20:22:55 -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.951 20:22:55 -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:24.951 20:22:55 -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:24.951 20:22:55 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:24.951 20:22:55 -- ftl/fio.sh@26 -- # uuid= 00:16:24.951 20:22:55 -- ftl/fio.sh@27 -- # timeout=240 00:16:24.951 20:22:55 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:24.951 20:22:55 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:24.951 20:22:55 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:24.951 20:22:55 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:24.951 20:22:55 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:24.951 20:22:55 -- ftl/fio.sh@40 
-- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:24.951 20:22:55 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:24.951 20:22:55 -- ftl/fio.sh@45 -- # svcpid=77081 00:16:24.951 20:22:55 -- ftl/fio.sh@46 -- # waitforlisten 77081 00:16:24.951 20:22:55 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:24.951 20:22:55 -- common/autotest_common.sh@817 -- # '[' -z 77081 ']' 00:16:24.951 20:22:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.951 20:22:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:24.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.951 20:22:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.951 20:22:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:24.951 20:22:55 -- common/autotest_common.sh@10 -- # set +x 00:16:25.210 [2024-04-24 20:22:55.256621] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:16:25.210 [2024-04-24 20:22:55.256735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77081 ] 00:16:25.210 [2024-04-24 20:22:55.429314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:25.469 [2024-04-24 20:22:55.682493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.470 [2024-04-24 20:22:55.682611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.470 [2024-04-24 20:22:55.682647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.845 20:22:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:26.845 20:22:56 -- common/autotest_common.sh@850 -- # return 0 00:16:26.845 20:22:56 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:26.845 20:22:56 -- ftl/common.sh@54 -- # local name=nvme0 00:16:26.845 20:22:56 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:26.845 20:22:56 -- ftl/common.sh@56 -- # local size=103424 00:16:26.845 20:22:56 -- ftl/common.sh@59 -- # local base_bdev 00:16:26.845 20:22:56 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:26.845 20:22:56 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:26.845 20:22:56 -- ftl/common.sh@62 -- # local base_size 00:16:26.845 20:22:56 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:26.845 20:22:56 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:16:26.845 20:22:56 -- common/autotest_common.sh@1365 -- # local bdev_info 00:16:26.845 20:22:56 -- common/autotest_common.sh@1366 -- # local bs 00:16:26.845 20:22:56 -- common/autotest_common.sh@1367 -- # local nb 00:16:26.845 20:22:56 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:27.103 20:22:57 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:16:27.103 { 00:16:27.103 "name": "nvme0n1", 00:16:27.103 "aliases": [ 00:16:27.103 "2bc2c45b-bfc4-4bf8-8360-9736fb6edb1c" 00:16:27.103 ], 00:16:27.103 "product_name": "NVMe disk", 00:16:27.103 "block_size": 4096, 00:16:27.104 "num_blocks": 1310720, 00:16:27.104 "uuid": "2bc2c45b-bfc4-4bf8-8360-9736fb6edb1c", 00:16:27.104 
"assigned_rate_limits": { 00:16:27.104 "rw_ios_per_sec": 0, 00:16:27.104 "rw_mbytes_per_sec": 0, 00:16:27.104 "r_mbytes_per_sec": 0, 00:16:27.104 "w_mbytes_per_sec": 0 00:16:27.104 }, 00:16:27.104 "claimed": false, 00:16:27.104 "zoned": false, 00:16:27.104 "supported_io_types": { 00:16:27.104 "read": true, 00:16:27.104 "write": true, 00:16:27.104 "unmap": true, 00:16:27.104 "write_zeroes": true, 00:16:27.104 "flush": true, 00:16:27.104 "reset": true, 00:16:27.104 "compare": true, 00:16:27.104 "compare_and_write": false, 00:16:27.104 "abort": true, 00:16:27.104 "nvme_admin": true, 00:16:27.104 "nvme_io": true 00:16:27.104 }, 00:16:27.104 "driver_specific": { 00:16:27.104 "nvme": [ 00:16:27.104 { 00:16:27.104 "pci_address": "0000:00:11.0", 00:16:27.104 "trid": { 00:16:27.104 "trtype": "PCIe", 00:16:27.104 "traddr": "0000:00:11.0" 00:16:27.104 }, 00:16:27.104 "ctrlr_data": { 00:16:27.104 "cntlid": 0, 00:16:27.104 "vendor_id": "0x1b36", 00:16:27.104 "model_number": "QEMU NVMe Ctrl", 00:16:27.104 "serial_number": "12341", 00:16:27.104 "firmware_revision": "8.0.0", 00:16:27.104 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:27.104 "oacs": { 00:16:27.104 "security": 0, 00:16:27.104 "format": 1, 00:16:27.104 "firmware": 0, 00:16:27.104 "ns_manage": 1 00:16:27.104 }, 00:16:27.104 "multi_ctrlr": false, 00:16:27.104 "ana_reporting": false 00:16:27.104 }, 00:16:27.104 "vs": { 00:16:27.104 "nvme_version": "1.4" 00:16:27.104 }, 00:16:27.104 "ns_data": { 00:16:27.104 "id": 1, 00:16:27.104 "can_share": false 00:16:27.104 } 00:16:27.104 } 00:16:27.104 ], 00:16:27.104 "mp_policy": "active_passive" 00:16:27.104 } 00:16:27.104 } 00:16:27.104 ]' 00:16:27.104 20:22:57 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:16:27.104 20:22:57 -- common/autotest_common.sh@1369 -- # bs=4096 00:16:27.104 20:22:57 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:16:27.104 20:22:57 -- common/autotest_common.sh@1370 -- # nb=1310720 00:16:27.104 20:22:57 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:16:27.104 20:22:57 -- common/autotest_common.sh@1374 -- # echo 5120 00:16:27.104 20:22:57 -- ftl/common.sh@63 -- # base_size=5120 00:16:27.104 20:22:57 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:27.104 20:22:57 -- ftl/common.sh@67 -- # clear_lvols 00:16:27.104 20:22:57 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:27.104 20:22:57 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:27.362 20:22:57 -- ftl/common.sh@28 -- # stores= 00:16:27.362 20:22:57 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:27.620 20:22:57 -- ftl/common.sh@68 -- # lvs=698f1177-274f-4ef8-be65-2f3a4f3f144b 00:16:27.620 20:22:57 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 698f1177-274f-4ef8-be65-2f3a4f3f144b 00:16:27.878 20:22:57 -- ftl/fio.sh@48 -- # split_bdev=067c30f5-6917-4596-998b-9f6069e03343 00:16:27.878 20:22:57 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 067c30f5-6917-4596-998b-9f6069e03343 00:16:27.878 20:22:57 -- ftl/common.sh@35 -- # local name=nvc0 00:16:27.878 20:22:57 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:27.878 20:22:57 -- ftl/common.sh@37 -- # local base_bdev=067c30f5-6917-4596-998b-9f6069e03343 00:16:27.878 20:22:57 -- ftl/common.sh@38 -- # local cache_size= 00:16:27.878 20:22:57 -- ftl/common.sh@41 -- # get_bdev_size 067c30f5-6917-4596-998b-9f6069e03343 00:16:27.878 20:22:57 -- 
common/autotest_common.sh@1364 -- # local bdev_name=067c30f5-6917-4596-998b-9f6069e03343 00:16:27.878 20:22:57 -- common/autotest_common.sh@1365 -- # local bdev_info 00:16:27.878 20:22:57 -- common/autotest_common.sh@1366 -- # local bs 00:16:27.878 20:22:57 -- common/autotest_common.sh@1367 -- # local nb 00:16:27.878 20:22:57 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 067c30f5-6917-4596-998b-9f6069e03343 00:16:28.137 20:22:58 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:16:28.137 { 00:16:28.137 "name": "067c30f5-6917-4596-998b-9f6069e03343", 00:16:28.137 "aliases": [ 00:16:28.137 "lvs/nvme0n1p0" 00:16:28.137 ], 00:16:28.137 "product_name": "Logical Volume", 00:16:28.137 "block_size": 4096, 00:16:28.137 "num_blocks": 26476544, 00:16:28.137 "uuid": "067c30f5-6917-4596-998b-9f6069e03343", 00:16:28.137 "assigned_rate_limits": { 00:16:28.137 "rw_ios_per_sec": 0, 00:16:28.137 "rw_mbytes_per_sec": 0, 00:16:28.137 "r_mbytes_per_sec": 0, 00:16:28.137 "w_mbytes_per_sec": 0 00:16:28.137 }, 00:16:28.137 "claimed": false, 00:16:28.137 "zoned": false, 00:16:28.137 "supported_io_types": { 00:16:28.137 "read": true, 00:16:28.137 "write": true, 00:16:28.137 "unmap": true, 00:16:28.137 "write_zeroes": true, 00:16:28.137 "flush": false, 00:16:28.137 "reset": true, 00:16:28.137 "compare": false, 00:16:28.137 "compare_and_write": false, 00:16:28.137 "abort": false, 00:16:28.137 "nvme_admin": false, 00:16:28.137 "nvme_io": false 00:16:28.137 }, 00:16:28.137 "driver_specific": { 00:16:28.137 "lvol": { 00:16:28.137 "lvol_store_uuid": "698f1177-274f-4ef8-be65-2f3a4f3f144b", 00:16:28.137 "base_bdev": "nvme0n1", 00:16:28.137 "thin_provision": true, 00:16:28.137 "snapshot": false, 00:16:28.137 "clone": false, 00:16:28.137 "esnap_clone": false 00:16:28.137 } 00:16:28.137 } 00:16:28.137 } 00:16:28.137 ]' 00:16:28.137 20:22:58 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:16:28.137 20:22:58 -- common/autotest_common.sh@1369 -- # bs=4096 00:16:28.137 20:22:58 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:16:28.137 20:22:58 -- common/autotest_common.sh@1370 -- # nb=26476544 00:16:28.137 20:22:58 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:16:28.137 20:22:58 -- common/autotest_common.sh@1374 -- # echo 103424 00:16:28.137 20:22:58 -- ftl/common.sh@41 -- # local base_size=5171 00:16:28.137 20:22:58 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:28.137 20:22:58 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:28.395 20:22:58 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:28.395 20:22:58 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:28.395 20:22:58 -- ftl/common.sh@48 -- # get_bdev_size 067c30f5-6917-4596-998b-9f6069e03343 00:16:28.395 20:22:58 -- common/autotest_common.sh@1364 -- # local bdev_name=067c30f5-6917-4596-998b-9f6069e03343 00:16:28.395 20:22:58 -- common/autotest_common.sh@1365 -- # local bdev_info 00:16:28.395 20:22:58 -- common/autotest_common.sh@1366 -- # local bs 00:16:28.395 20:22:58 -- common/autotest_common.sh@1367 -- # local nb 00:16:28.395 20:22:58 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 067c30f5-6917-4596-998b-9f6069e03343 00:16:28.654 20:22:58 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:16:28.654 { 00:16:28.654 "name": "067c30f5-6917-4596-998b-9f6069e03343", 00:16:28.654 "aliases": [ 00:16:28.654 "lvs/nvme0n1p0" 00:16:28.654 ], 
00:16:28.654 "product_name": "Logical Volume", 00:16:28.654 "block_size": 4096, 00:16:28.654 "num_blocks": 26476544, 00:16:28.654 "uuid": "067c30f5-6917-4596-998b-9f6069e03343", 00:16:28.654 "assigned_rate_limits": { 00:16:28.654 "rw_ios_per_sec": 0, 00:16:28.654 "rw_mbytes_per_sec": 0, 00:16:28.654 "r_mbytes_per_sec": 0, 00:16:28.654 "w_mbytes_per_sec": 0 00:16:28.654 }, 00:16:28.654 "claimed": false, 00:16:28.654 "zoned": false, 00:16:28.654 "supported_io_types": { 00:16:28.654 "read": true, 00:16:28.654 "write": true, 00:16:28.654 "unmap": true, 00:16:28.654 "write_zeroes": true, 00:16:28.654 "flush": false, 00:16:28.654 "reset": true, 00:16:28.654 "compare": false, 00:16:28.654 "compare_and_write": false, 00:16:28.654 "abort": false, 00:16:28.654 "nvme_admin": false, 00:16:28.654 "nvme_io": false 00:16:28.654 }, 00:16:28.654 "driver_specific": { 00:16:28.654 "lvol": { 00:16:28.654 "lvol_store_uuid": "698f1177-274f-4ef8-be65-2f3a4f3f144b", 00:16:28.654 "base_bdev": "nvme0n1", 00:16:28.654 "thin_provision": true, 00:16:28.654 "snapshot": false, 00:16:28.654 "clone": false, 00:16:28.654 "esnap_clone": false 00:16:28.654 } 00:16:28.654 } 00:16:28.654 } 00:16:28.654 ]' 00:16:28.654 20:22:58 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:16:28.654 20:22:58 -- common/autotest_common.sh@1369 -- # bs=4096 00:16:28.654 20:22:58 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:16:28.654 20:22:58 -- common/autotest_common.sh@1370 -- # nb=26476544 00:16:28.654 20:22:58 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:16:28.654 20:22:58 -- common/autotest_common.sh@1374 -- # echo 103424 00:16:28.654 20:22:58 -- ftl/common.sh@48 -- # cache_size=5171 00:16:28.654 20:22:58 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:28.912 20:22:58 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:28.912 20:22:58 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:28.912 20:22:58 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:28.912 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:28.912 20:22:58 -- ftl/fio.sh@56 -- # get_bdev_size 067c30f5-6917-4596-998b-9f6069e03343 00:16:28.912 20:22:58 -- common/autotest_common.sh@1364 -- # local bdev_name=067c30f5-6917-4596-998b-9f6069e03343 00:16:28.912 20:22:58 -- common/autotest_common.sh@1365 -- # local bdev_info 00:16:28.912 20:22:58 -- common/autotest_common.sh@1366 -- # local bs 00:16:28.912 20:22:58 -- common/autotest_common.sh@1367 -- # local nb 00:16:28.912 20:22:58 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 067c30f5-6917-4596-998b-9f6069e03343 00:16:29.170 20:22:59 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:16:29.170 { 00:16:29.170 "name": "067c30f5-6917-4596-998b-9f6069e03343", 00:16:29.170 "aliases": [ 00:16:29.170 "lvs/nvme0n1p0" 00:16:29.170 ], 00:16:29.170 "product_name": "Logical Volume", 00:16:29.170 "block_size": 4096, 00:16:29.170 "num_blocks": 26476544, 00:16:29.170 "uuid": "067c30f5-6917-4596-998b-9f6069e03343", 00:16:29.170 "assigned_rate_limits": { 00:16:29.170 "rw_ios_per_sec": 0, 00:16:29.170 "rw_mbytes_per_sec": 0, 00:16:29.170 "r_mbytes_per_sec": 0, 00:16:29.170 "w_mbytes_per_sec": 0 00:16:29.170 }, 00:16:29.170 "claimed": false, 00:16:29.170 "zoned": false, 00:16:29.170 "supported_io_types": { 00:16:29.170 "read": true, 00:16:29.170 "write": true, 00:16:29.170 "unmap": true, 00:16:29.170 "write_zeroes": true, 00:16:29.170 "flush": false, 00:16:29.170 
"reset": true, 00:16:29.170 "compare": false, 00:16:29.170 "compare_and_write": false, 00:16:29.170 "abort": false, 00:16:29.170 "nvme_admin": false, 00:16:29.170 "nvme_io": false 00:16:29.170 }, 00:16:29.170 "driver_specific": { 00:16:29.170 "lvol": { 00:16:29.170 "lvol_store_uuid": "698f1177-274f-4ef8-be65-2f3a4f3f144b", 00:16:29.170 "base_bdev": "nvme0n1", 00:16:29.170 "thin_provision": true, 00:16:29.170 "snapshot": false, 00:16:29.170 "clone": false, 00:16:29.171 "esnap_clone": false 00:16:29.171 } 00:16:29.171 } 00:16:29.171 } 00:16:29.171 ]' 00:16:29.171 20:22:59 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:16:29.171 20:22:59 -- common/autotest_common.sh@1369 -- # bs=4096 00:16:29.171 20:22:59 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:16:29.171 20:22:59 -- common/autotest_common.sh@1370 -- # nb=26476544 00:16:29.171 20:22:59 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:16:29.171 20:22:59 -- common/autotest_common.sh@1374 -- # echo 103424 00:16:29.171 20:22:59 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:29.171 20:22:59 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:29.171 20:22:59 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 067c30f5-6917-4596-998b-9f6069e03343 -c nvc0n1p0 --l2p_dram_limit 60 00:16:29.430 [2024-04-24 20:22:59.413625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.430 [2024-04-24 20:22:59.413677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:29.430 [2024-04-24 20:22:59.413696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:29.430 [2024-04-24 20:22:59.413709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.430 [2024-04-24 20:22:59.413784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.430 [2024-04-24 20:22:59.413796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:29.430 [2024-04-24 20:22:59.413810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:16:29.430 [2024-04-24 20:22:59.413820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.430 [2024-04-24 20:22:59.413866] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:29.430 [2024-04-24 20:22:59.415058] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:29.430 [2024-04-24 20:22:59.415085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.430 [2024-04-24 20:22:59.415097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:29.430 [2024-04-24 20:22:59.415114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.238 ms 00:16:29.430 [2024-04-24 20:22:59.415125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.430 [2024-04-24 20:22:59.415225] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a35c558c-abfe-47dd-b7eb-bf820c6ae2aa 00:16:29.430 [2024-04-24 20:22:59.416685] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.430 [2024-04-24 20:22:59.416713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:29.430 [2024-04-24 20:22:59.416726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:29.430 [2024-04-24 20:22:59.416738] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.430 [2024-04-24 20:22:59.424323] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.430 [2024-04-24 20:22:59.424357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:29.430 [2024-04-24 20:22:59.424370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.526 ms 00:16:29.430 [2024-04-24 20:22:59.424382] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.430 [2024-04-24 20:22:59.424500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.430 [2024-04-24 20:22:59.424517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:29.430 [2024-04-24 20:22:59.424528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:16:29.430 [2024-04-24 20:22:59.424540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.430 [2024-04-24 20:22:59.424618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.430 [2024-04-24 20:22:59.424639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:29.430 [2024-04-24 20:22:59.424650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:29.430 [2024-04-24 20:22:59.424662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.430 [2024-04-24 20:22:59.424699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:29.430 [2024-04-24 20:22:59.430461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.430 [2024-04-24 20:22:59.430492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:29.430 [2024-04-24 20:22:59.430507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.775 ms 00:16:29.430 [2024-04-24 20:22:59.430517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.430 [2024-04-24 20:22:59.430564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.430 [2024-04-24 20:22:59.430575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:29.430 [2024-04-24 20:22:59.430587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:29.430 [2024-04-24 20:22:59.430597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.430 [2024-04-24 20:22:59.430660] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:29.430 [2024-04-24 20:22:59.430794] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:29.430 [2024-04-24 20:22:59.430812] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:29.430 [2024-04-24 20:22:59.430827] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:29.430 [2024-04-24 20:22:59.430845] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:29.430 [2024-04-24 20:22:59.430858] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:29.430 [2024-04-24 20:22:59.430886] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:29.430 [2024-04-24 20:22:59.430897] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 
4 00:16:29.430 [2024-04-24 20:22:59.430910] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:29.430 [2024-04-24 20:22:59.430920] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:29.430 [2024-04-24 20:22:59.430936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.430 [2024-04-24 20:22:59.430947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:29.430 [2024-04-24 20:22:59.430960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:16:29.430 [2024-04-24 20:22:59.430971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.430 [2024-04-24 20:22:59.431043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.430 [2024-04-24 20:22:59.431054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:29.430 [2024-04-24 20:22:59.431067] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:29.430 [2024-04-24 20:22:59.431080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.430 [2024-04-24 20:22:59.431185] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:29.430 [2024-04-24 20:22:59.431200] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:29.430 [2024-04-24 20:22:59.431220] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:29.430 [2024-04-24 20:22:59.431233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.430 [2024-04-24 20:22:59.431247] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:29.430 [2024-04-24 20:22:59.431256] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:29.430 [2024-04-24 20:22:59.431268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:29.431 [2024-04-24 20:22:59.431278] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:29.431 [2024-04-24 20:22:59.431290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:29.431 [2024-04-24 20:22:59.431300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:29.431 [2024-04-24 20:22:59.431312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:29.431 [2024-04-24 20:22:59.431321] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:29.431 [2024-04-24 20:22:59.431335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:29.431 [2024-04-24 20:22:59.431344] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:29.431 [2024-04-24 20:22:59.431356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:29.431 [2024-04-24 20:22:59.431366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.431 [2024-04-24 20:22:59.431377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:29.431 [2024-04-24 20:22:59.431387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:29.431 [2024-04-24 20:22:59.431401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.431 [2024-04-24 20:22:59.431411] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:29.431 [2024-04-24 20:22:59.431422] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:29.431 [2024-04-24 20:22:59.431432] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:29.431 [2024-04-24 20:22:59.431444] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:29.431 [2024-04-24 20:22:59.431453] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:29.431 [2024-04-24 20:22:59.431465] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:29.431 [2024-04-24 20:22:59.431474] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:29.431 [2024-04-24 20:22:59.431486] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:29.431 [2024-04-24 20:22:59.431495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:29.431 [2024-04-24 20:22:59.431507] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:29.431 [2024-04-24 20:22:59.431516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:29.431 [2024-04-24 20:22:59.431528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:29.431 [2024-04-24 20:22:59.431537] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:29.431 [2024-04-24 20:22:59.431549] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:29.431 [2024-04-24 20:22:59.431558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:29.431 [2024-04-24 20:22:59.431572] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:29.431 [2024-04-24 20:22:59.431583] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:29.431 [2024-04-24 20:22:59.431611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:29.431 [2024-04-24 20:22:59.431621] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:29.431 [2024-04-24 20:22:59.431632] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:29.431 [2024-04-24 20:22:59.431643] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:29.431 [2024-04-24 20:22:59.431654] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:29.431 [2024-04-24 20:22:59.431665] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:29.431 [2024-04-24 20:22:59.431677] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:29.431 [2024-04-24 20:22:59.431687] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:29.431 [2024-04-24 20:22:59.431704] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:29.431 [2024-04-24 20:22:59.431714] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:29.431 [2024-04-24 20:22:59.431726] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:29.431 [2024-04-24 20:22:59.431735] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:29.431 [2024-04-24 20:22:59.431747] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:29.431 [2024-04-24 20:22:59.431757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:29.431 [2024-04-24 20:22:59.431773] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:29.431 [2024-04-24 20:22:59.431786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:29.431 [2024-04-24 
20:22:59.431800] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:29.431 [2024-04-24 20:22:59.431811] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:29.431 [2024-04-24 20:22:59.431837] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:29.431 [2024-04-24 20:22:59.431847] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:29.431 [2024-04-24 20:22:59.431860] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:29.431 [2024-04-24 20:22:59.431880] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:29.431 [2024-04-24 20:22:59.431892] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:29.431 [2024-04-24 20:22:59.431902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:29.431 [2024-04-24 20:22:59.431915] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:29.431 [2024-04-24 20:22:59.431925] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:29.431 [2024-04-24 20:22:59.431939] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:29.431 [2024-04-24 20:22:59.431949] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:29.431 [2024-04-24 20:22:59.431962] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:29.431 [2024-04-24 20:22:59.431972] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:29.431 [2024-04-24 20:22:59.431988] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:29.431 [2024-04-24 20:22:59.432000] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:29.431 [2024-04-24 20:22:59.432013] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:29.431 [2024-04-24 20:22:59.432023] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:29.431 [2024-04-24 20:22:59.432035] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:29.431 [2024-04-24 20:22:59.432046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.431 [2024-04-24 20:22:59.432059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:29.431 [2024-04-24 20:22:59.432070] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.903 ms 00:16:29.431 [2024-04-24 20:22:59.432082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.431 [2024-04-24 20:22:59.457183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.431 [2024-04-24 20:22:59.457229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:29.431 [2024-04-24 20:22:59.457245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.064 ms 00:16:29.431 [2024-04-24 20:22:59.457257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.431 [2024-04-24 20:22:59.457353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.431 [2024-04-24 20:22:59.457366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:29.431 [2024-04-24 20:22:59.457379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:16:29.431 [2024-04-24 20:22:59.457394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.431 [2024-04-24 20:22:59.511068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.431 [2024-04-24 20:22:59.511121] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:29.431 [2024-04-24 20:22:59.511136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.683 ms 00:16:29.431 [2024-04-24 20:22:59.511149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.431 [2024-04-24 20:22:59.511213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.431 [2024-04-24 20:22:59.511228] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:29.431 [2024-04-24 20:22:59.511239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:29.431 [2024-04-24 20:22:59.511251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.431 [2024-04-24 20:22:59.511754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.431 [2024-04-24 20:22:59.511770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:29.431 [2024-04-24 20:22:59.511781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:16:29.431 [2024-04-24 20:22:59.511795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.431 [2024-04-24 20:22:59.511933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.431 [2024-04-24 20:22:59.511951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:29.431 [2024-04-24 20:22:59.511962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:16:29.431 [2024-04-24 20:22:59.511977] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.431 [2024-04-24 20:22:59.547199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.431 [2024-04-24 20:22:59.547252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:29.431 [2024-04-24 20:22:59.547267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.247 ms 00:16:29.431 [2024-04-24 20:22:59.547280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.431 [2024-04-24 20:22:59.561013] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:29.431 [2024-04-24 20:22:59.577655] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.431 [2024-04-24 20:22:59.577707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:29.431 [2024-04-24 20:22:59.577725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.263 ms 00:16:29.432 [2024-04-24 20:22:59.577736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.730 [2024-04-24 20:22:59.667235] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:29.730 [2024-04-24 20:22:59.667296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:29.730 [2024-04-24 20:22:59.667317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.576 ms 00:16:29.730 [2024-04-24 20:22:59.667327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:29.730 [2024-04-24 20:22:59.667392] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:29.730 [2024-04-24 20:22:59.667406] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:35.066 [2024-04-24 20:23:04.163780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.066 [2024-04-24 20:23:04.163846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:35.066 [2024-04-24 20:23:04.163904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4503.678 ms 00:16:35.066 [2024-04-24 20:23:04.163915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.066 [2024-04-24 20:23:04.164147] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.066 [2024-04-24 20:23:04.164173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:35.066 [2024-04-24 20:23:04.164191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:16:35.066 [2024-04-24 20:23:04.164202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.066 [2024-04-24 20:23:04.203989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.066 [2024-04-24 20:23:04.204050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:35.066 [2024-04-24 20:23:04.204070] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.770 ms 00:16:35.066 [2024-04-24 20:23:04.204081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.066 [2024-04-24 20:23:04.243494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.066 [2024-04-24 20:23:04.243548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:35.066 [2024-04-24 20:23:04.243568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.400 ms 00:16:35.066 [2024-04-24 20:23:04.243579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.066 [2024-04-24 20:23:04.244037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.066 [2024-04-24 20:23:04.244054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:35.066 [2024-04-24 20:23:04.244069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:16:35.066 [2024-04-24 20:23:04.244092] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.066 [2024-04-24 20:23:04.340989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
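Each management step in this startup sequence is logged by trace_step as an Action / name / duration / status quadruple, so per-step timing can be recovered mechanically from a captured console log. A minimal sketch, assuming one record per line in a hypothetical capture file ftl.log:

  # List FTL management steps sorted by duration, slowest first.
  # Records are split on "name: " or "duration: ", so $2 holds either
  # the step name or its "<value> ms" duration.
  awk -F'name: |duration: ' \
      '/trace_step/ && /name: /     { step = $2 }
       /trace_step/ && /duration: / { print $2 "\t" step }' ftl.log \
    | sort -rn | head

Run against this log it would put the 4.5-second NV cache scrub at the top, with everything else in the low tens of milliseconds.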
00:16:35.066 [2024-04-24 20:23:04.341045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:35.066 [2024-04-24 20:23:04.341065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.969 ms 00:16:35.066 [2024-04-24 20:23:04.341077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.066 [2024-04-24 20:23:04.382482] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.066 [2024-04-24 20:23:04.382542] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:35.066 [2024-04-24 20:23:04.382577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.401 ms 00:16:35.066 [2024-04-24 20:23:04.382589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.066 [2024-04-24 20:23:04.387030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.066 [2024-04-24 20:23:04.387057] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:35.066 [2024-04-24 20:23:04.387073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.386 ms 00:16:35.066 [2024-04-24 20:23:04.387099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.066 [2024-04-24 20:23:04.426773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.066 [2024-04-24 20:23:04.426825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:35.066 [2024-04-24 20:23:04.426844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.638 ms 00:16:35.066 [2024-04-24 20:23:04.426861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.066 [2024-04-24 20:23:04.426929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.067 [2024-04-24 20:23:04.426941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:35.067 [2024-04-24 20:23:04.426954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:35.067 [2024-04-24 20:23:04.426964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.067 [2024-04-24 20:23:04.427119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.067 [2024-04-24 20:23:04.427133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:35.067 [2024-04-24 20:23:04.427147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:16:35.067 [2024-04-24 20:23:04.427157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.067 [2024-04-24 20:23:04.428383] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5022.396 ms, result 0 00:16:35.067 { 00:16:35.067 "name": "ftl0", 00:16:35.067 "uuid": "a35c558c-abfe-47dd-b7eb-bf820c6ae2aa" 00:16:35.067 } 00:16:35.067 20:23:04 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:35.067 20:23:04 -- common/autotest_common.sh@885 -- # local bdev_name=ftl0 00:16:35.067 20:23:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:35.067 20:23:04 -- common/autotest_common.sh@887 -- # local i 00:16:35.067 20:23:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:35.067 20:23:04 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:35.067 20:23:04 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:35.067 20:23:04 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:35.067 [ 00:16:35.067 { 00:16:35.067 "name": "ftl0", 00:16:35.067 "aliases": [ 00:16:35.067 "a35c558c-abfe-47dd-b7eb-bf820c6ae2aa" 00:16:35.067 ], 00:16:35.067 "product_name": "FTL disk", 00:16:35.067 "block_size": 4096, 00:16:35.067 "num_blocks": 20971520, 00:16:35.067 "uuid": "a35c558c-abfe-47dd-b7eb-bf820c6ae2aa", 00:16:35.067 "assigned_rate_limits": { 00:16:35.067 "rw_ios_per_sec": 0, 00:16:35.067 "rw_mbytes_per_sec": 0, 00:16:35.067 "r_mbytes_per_sec": 0, 00:16:35.067 "w_mbytes_per_sec": 0 00:16:35.067 }, 00:16:35.067 "claimed": false, 00:16:35.067 "zoned": false, 00:16:35.067 "supported_io_types": { 00:16:35.067 "read": true, 00:16:35.067 "write": true, 00:16:35.067 "unmap": true, 00:16:35.067 "write_zeroes": true, 00:16:35.067 "flush": true, 00:16:35.067 "reset": false, 00:16:35.067 "compare": false, 00:16:35.067 "compare_and_write": false, 00:16:35.067 "abort": false, 00:16:35.067 "nvme_admin": false, 00:16:35.067 "nvme_io": false 00:16:35.067 }, 00:16:35.067 "driver_specific": { 00:16:35.067 "ftl": { 00:16:35.067 "base_bdev": "067c30f5-6917-4596-998b-9f6069e03343", 00:16:35.067 "cache": "nvc0n1p0" 00:16:35.067 } 00:16:35.067 } 00:16:35.067 } 00:16:35.067 ] 00:16:35.067 20:23:04 -- common/autotest_common.sh@893 -- # return 0 00:16:35.067 20:23:04 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:35.067 20:23:04 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:35.067 20:23:05 -- ftl/fio.sh@70 -- # echo ']}' 00:16:35.067 20:23:05 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:35.067 [2024-04-24 20:23:05.219722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.067 [2024-04-24 20:23:05.219782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:35.067 [2024-04-24 20:23:05.219799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:35.067 [2024-04-24 20:23:05.219817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.067 [2024-04-24 20:23:05.219869] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:35.067 [2024-04-24 20:23:05.223876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.067 [2024-04-24 20:23:05.223913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:35.067 [2024-04-24 20:23:05.223930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.987 ms 00:16:35.067 [2024-04-24 20:23:05.223941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.067 [2024-04-24 20:23:05.224475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.067 [2024-04-24 20:23:05.224499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:35.067 [2024-04-24 20:23:05.224513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:16:35.067 [2024-04-24 20:23:05.224527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.067 [2024-04-24 20:23:05.227083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.067 [2024-04-24 20:23:05.227103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:35.067 [2024-04-24 20:23:05.227121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.530 ms 00:16:35.067 [2024-04-24 
20:23:05.227131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.067 [2024-04-24 20:23:05.232345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.067 [2024-04-24 20:23:05.232372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:35.067 [2024-04-24 20:23:05.232391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.185 ms 00:16:35.067 [2024-04-24 20:23:05.232401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.067 [2024-04-24 20:23:05.271914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.067 [2024-04-24 20:23:05.271983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:35.067 [2024-04-24 20:23:05.272001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.466 ms 00:16:35.067 [2024-04-24 20:23:05.272012] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.067 [2024-04-24 20:23:05.295333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.067 [2024-04-24 20:23:05.295381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:35.067 [2024-04-24 20:23:05.295401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.295 ms 00:16:35.067 [2024-04-24 20:23:05.295413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.067 [2024-04-24 20:23:05.295633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.067 [2024-04-24 20:23:05.295668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:35.067 [2024-04-24 20:23:05.295683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:16:35.067 [2024-04-24 20:23:05.295694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.326 [2024-04-24 20:23:05.335437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.326 [2024-04-24 20:23:05.335487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:35.326 [2024-04-24 20:23:05.335511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.770 ms 00:16:35.326 [2024-04-24 20:23:05.335522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.326 [2024-04-24 20:23:05.374372] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.326 [2024-04-24 20:23:05.374430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:35.326 [2024-04-24 20:23:05.374449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.839 ms 00:16:35.326 [2024-04-24 20:23:05.374459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.326 [2024-04-24 20:23:05.414469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.326 [2024-04-24 20:23:05.414521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:35.326 [2024-04-24 20:23:05.414539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.999 ms 00:16:35.326 [2024-04-24 20:23:05.414549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.326 [2024-04-24 20:23:05.453021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.327 [2024-04-24 20:23:05.453075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:35.327 [2024-04-24 20:23:05.453094] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.383 ms 00:16:35.327 [2024-04-24 20:23:05.453104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.327 [2024-04-24 20:23:05.453165] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:35.327 [2024-04-24 20:23:05.453183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 
state: free 00:16:35.327 [2024-04-24 20:23:05.453474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 
0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.453992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:35.327 [2024-04-24 20:23:05.454223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454427] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:35.328 [2024-04-24 20:23:05.454470] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:35.328 [2024-04-24 20:23:05.454483] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a35c558c-abfe-47dd-b7eb-bf820c6ae2aa 00:16:35.328 [2024-04-24 20:23:05.454494] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:35.328 [2024-04-24 20:23:05.454506] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:35.328 [2024-04-24 20:23:05.454515] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:35.328 [2024-04-24 20:23:05.454528] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:35.328 [2024-04-24 20:23:05.454537] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:35.328 [2024-04-24 20:23:05.454553] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:35.328 [2024-04-24 20:23:05.454563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:35.328 [2024-04-24 20:23:05.454574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:35.328 [2024-04-24 20:23:05.454583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:35.328 [2024-04-24 20:23:05.454596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.328 [2024-04-24 20:23:05.454606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:35.328 [2024-04-24 20:23:05.454622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.435 ms 00:16:35.328 [2024-04-24 20:23:05.454632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.328 [2024-04-24 20:23:05.474861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.328 [2024-04-24 20:23:05.474914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:35.328 [2024-04-24 20:23:05.474945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.172 ms 00:16:35.328 [2024-04-24 20:23:05.474959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.328 [2024-04-24 20:23:05.475268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:35.328 [2024-04-24 20:23:05.475284] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:35.328 [2024-04-24 20:23:05.475298] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:16:35.328 [2024-04-24 20:23:05.475308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.328 [2024-04-24 20:23:05.545960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.328 [2024-04-24 20:23:05.546013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:35.328 [2024-04-24 20:23:05.546035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.328 [2024-04-24 20:23:05.546045] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.328 [2024-04-24 20:23:05.546126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
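In the statistics dump above, WAF (write amplification factor) is total media writes divided by user writes; this first-startup pass issued 960 device writes and no user writes, so the ratio is 960/0 and the debug code prints inf. The same arithmetic as a one-liner, with the values taken from the dump:

  # WAF = total media writes / user writes (960 and 0 in the dump above).
  awk 'BEGIN { total = 960; user = 0; print "WAF:", (user ? total / user : "inf") }'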
00:16:35.328 [2024-04-24 20:23:05.546138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:35.328 [2024-04-24 20:23:05.546154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.328 [2024-04-24 20:23:05.546164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.328 [2024-04-24 20:23:05.546279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.328 [2024-04-24 20:23:05.546293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:35.328 [2024-04-24 20:23:05.546307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.328 [2024-04-24 20:23:05.546320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.328 [2024-04-24 20:23:05.546362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.328 [2024-04-24 20:23:05.546372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:35.328 [2024-04-24 20:23:05.546385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.328 [2024-04-24 20:23:05.546395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.588 [2024-04-24 20:23:05.686104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.588 [2024-04-24 20:23:05.686168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:35.588 [2024-04-24 20:23:05.686187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.588 [2024-04-24 20:23:05.686200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.588 [2024-04-24 20:23:05.735022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.588 [2024-04-24 20:23:05.735088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:35.588 [2024-04-24 20:23:05.735110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.588 [2024-04-24 20:23:05.735120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.588 [2024-04-24 20:23:05.735227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.588 [2024-04-24 20:23:05.735239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:35.588 [2024-04-24 20:23:05.735252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.588 [2024-04-24 20:23:05.735262] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.588 [2024-04-24 20:23:05.735338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.588 [2024-04-24 20:23:05.735350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:35.588 [2024-04-24 20:23:05.735363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.588 [2024-04-24 20:23:05.735373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.588 [2024-04-24 20:23:05.735514] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.588 [2024-04-24 20:23:05.735528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:35.588 [2024-04-24 20:23:05.735541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.588 [2024-04-24 20:23:05.735551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.588 [2024-04-24 
20:23:05.735609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.588 [2024-04-24 20:23:05.735623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:35.588 [2024-04-24 20:23:05.735639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.588 [2024-04-24 20:23:05.735648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.588 [2024-04-24 20:23:05.735697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.588 [2024-04-24 20:23:05.735709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:35.588 [2024-04-24 20:23:05.735721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.588 [2024-04-24 20:23:05.735732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.588 [2024-04-24 20:23:05.735788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:35.588 [2024-04-24 20:23:05.735800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:35.588 [2024-04-24 20:23:05.735813] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:35.588 [2024-04-24 20:23:05.735826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:35.588 [2024-04-24 20:23:05.736021] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 517.109 ms, result 0 00:16:35.588 true 00:16:35.588 20:23:05 -- ftl/fio.sh@75 -- # killprocess 77081 00:16:35.588 20:23:05 -- common/autotest_common.sh@936 -- # '[' -z 77081 ']' 00:16:35.588 20:23:05 -- common/autotest_common.sh@940 -- # kill -0 77081 00:16:35.588 20:23:05 -- common/autotest_common.sh@941 -- # uname 00:16:35.588 20:23:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:35.588 20:23:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77081 00:16:35.588 killing process with pid 77081 00:16:35.588 20:23:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:35.588 20:23:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:35.588 20:23:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77081' 00:16:35.588 20:23:05 -- common/autotest_common.sh@955 -- # kill 77081 00:16:35.588 20:23:05 -- common/autotest_common.sh@960 -- # wait 77081 00:16:40.860 20:23:10 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:40.860 20:23:10 -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:40.860 20:23:10 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:40.860 20:23:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:40.860 20:23:10 -- common/autotest_common.sh@10 -- # set +x 00:16:40.860 20:23:10 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:40.860 20:23:10 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:40.860 20:23:10 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:16:40.860 20:23:10 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:40.860 20:23:10 -- common/autotest_common.sh@1325 -- # local sanitizers 00:16:40.860 20:23:10 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:40.860 20:23:10 -- common/autotest_common.sh@1327 -- # shift 00:16:40.860 
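The xtrace above shows the fio_bdev wrapper locating the ASan runtime the SPDK fio plugin was linked against and preloading it ahead of the plugin, so fio itself runs under the same sanitizer as the instrumented bdev code. Condensed into a standalone sketch (paths are the ones from this run; job.fio stands in for the real job file):

  # Mirror the traced ldd | grep libasan | awk steps, then launch fio
  # with both the sanitizer runtime and the SPDK plugin preloaded.
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio job.fio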
20:23:10 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:16:40.860 20:23:10 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:16:40.860 20:23:10 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:40.860 20:23:10 -- common/autotest_common.sh@1331 -- # grep libasan 00:16:40.860 20:23:10 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:16:40.860 20:23:10 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:40.860 20:23:10 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:40.860 20:23:10 -- common/autotest_common.sh@1333 -- # break 00:16:40.860 20:23:10 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:40.860 20:23:10 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:40.860 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:40.860 fio-3.35 00:16:40.860 Starting 1 thread 00:16:46.150 00:16:46.150 test: (groupid=0, jobs=1): err= 0: pid=77304: Wed Apr 24 20:23:16 2024 00:16:46.150 read: IOPS=1046, BW=69.5MiB/s (72.8MB/s)(255MiB/3664msec) 00:16:46.150 slat (nsec): min=4304, max=27968, avg=6404.75, stdev=2745.13 00:16:46.150 clat (usec): min=274, max=851, avg=430.07, stdev=58.30 00:16:46.150 lat (usec): min=282, max=862, avg=436.48, stdev=58.64 00:16:46.150 clat percentiles (usec): 00:16:46.150 | 1.00th=[ 318], 5.00th=[ 334], 10.00th=[ 371], 20.00th=[ 388], 00:16:46.150 | 30.00th=[ 392], 40.00th=[ 404], 50.00th=[ 420], 60.00th=[ 449], 00:16:46.150 | 70.00th=[ 457], 80.00th=[ 474], 90.00th=[ 515], 95.00th=[ 529], 00:16:46.150 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 742], 99.95th=[ 807], 00:16:46.150 | 99.99th=[ 848] 00:16:46.150 write: IOPS=1053, BW=70.0MiB/s (73.4MB/s)(256MiB/3660msec); 0 zone resets 00:16:46.150 slat (nsec): min=15106, max=80820, avg=19503.16, stdev=4605.99 00:16:46.150 clat (usec): min=326, max=2198, avg=486.01, stdev=74.87 00:16:46.150 lat (usec): min=342, max=2216, avg=505.52, stdev=75.12 00:16:46.150 clat percentiles (usec): 00:16:46.150 | 1.00th=[ 359], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 420], 00:16:46.150 | 30.00th=[ 441], 40.00th=[ 469], 50.00th=[ 478], 60.00th=[ 490], 00:16:46.150 | 70.00th=[ 510], 80.00th=[ 545], 90.00th=[ 562], 95.00th=[ 611], 00:16:46.150 | 99.00th=[ 717], 99.50th=[ 775], 99.90th=[ 857], 99.95th=[ 881], 00:16:46.150 | 99.99th=[ 2212] 00:16:46.150 bw ( KiB/s): min=69632, max=75888, per=99.77%, avg=71477.71, stdev=2160.16, samples=7 00:16:46.150 iops : min= 1024, max= 1116, avg=1051.14, stdev=31.77, samples=7 00:16:46.150 lat (usec) : 500=76.67%, 750=22.93%, 1000=0.39% 00:16:46.150 lat (msec) : 4=0.01% 00:16:46.150 cpu : usr=99.15%, sys=0.16%, ctx=7, majf=0, minf=1171 00:16:46.150 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:46.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.151 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:46.151 00:16:46.151 Run status group 0 (all jobs): 00:16:46.151 READ: bw=69.5MiB/s (72.8MB/s), 69.5MiB/s-69.5MiB/s (72.8MB/s-72.8MB/s), io=255MiB (267MB), run=3664-3664msec 00:16:46.151 WRITE: 
bw=70.0MiB/s (73.4MB/s), 70.0MiB/s-70.0MiB/s (73.4MB/s-73.4MB/s), io=256MiB (269MB), run=3660-3660msec 00:16:48.056 ----------------------------------------------------- 00:16:48.056 Suppressions used: 00:16:48.056 count bytes template 00:16:48.056 1 5 /usr/src/fio/parse.c 00:16:48.056 1 8 libtcmalloc_minimal.so 00:16:48.056 1 904 libcrypto.so 00:16:48.056 ----------------------------------------------------- 00:16:48.056 00:16:48.316 20:23:18 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:48.316 20:23:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:48.316 20:23:18 -- common/autotest_common.sh@10 -- # set +x 00:16:48.316 20:23:18 -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:48.316 20:23:18 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:48.316 20:23:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:48.316 20:23:18 -- common/autotest_common.sh@10 -- # set +x 00:16:48.316 20:23:18 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:48.316 20:23:18 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:48.316 20:23:18 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:16:48.316 20:23:18 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:48.316 20:23:18 -- common/autotest_common.sh@1325 -- # local sanitizers 00:16:48.316 20:23:18 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:48.316 20:23:18 -- common/autotest_common.sh@1327 -- # shift 00:16:48.316 20:23:18 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:16:48.316 20:23:18 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:16:48.316 20:23:18 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:48.316 20:23:18 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:16:48.316 20:23:18 -- common/autotest_common.sh@1331 -- # grep libasan 00:16:48.316 20:23:18 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:48.316 20:23:18 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:48.316 20:23:18 -- common/autotest_common.sh@1333 -- # break 00:16:48.316 20:23:18 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:48.316 20:23:18 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:48.575 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:48.575 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:48.575 fio-3.35 00:16:48.575 Starting 2 threads 00:17:20.762 00:17:20.762 first_half: (groupid=0, jobs=1): err= 0: pid=77409: Wed Apr 24 20:23:47 2024 00:17:20.762 read: IOPS=2388, BW=9556KiB/s (9785kB/s)(256MiB/27407msec) 00:17:20.762 slat (nsec): min=3626, max=50876, avg=9624.49, stdev=4347.35 00:17:20.762 clat (usec): min=722, max=342210, avg=44992.26, stdev=31912.77 00:17:20.762 lat (usec): min=734, max=342217, avg=45001.89, stdev=31913.32 00:17:20.762 clat percentiles (msec): 00:17:20.762 | 1.00th=[ 12], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 37], 00:17:20.762 | 30.00th=[ 37], 40.00th=[ 38], 
50.00th=[ 38], 60.00th=[ 39], 00:17:20.762 | 70.00th=[ 40], 80.00th=[ 44], 90.00th=[ 47], 95.00th=[ 94], 00:17:20.762 | 99.00th=[ 218], 99.50th=[ 241], 99.90th=[ 275], 99.95th=[ 300], 00:17:20.762 | 99.99th=[ 338] 00:17:20.762 write: IOPS=2394, BW=9580KiB/s (9809kB/s)(256MiB/27365msec); 0 zone resets 00:17:20.762 slat (usec): min=4, max=865, avg= 9.55, stdev= 6.79 00:17:20.762 clat (usec): min=453, max=97534, avg=8538.26, stdev=7826.43 00:17:20.762 lat (usec): min=470, max=97543, avg=8547.80, stdev=7826.50 00:17:20.762 clat percentiles (usec): 00:17:20.762 | 1.00th=[ 1221], 5.00th=[ 1598], 10.00th=[ 2114], 20.00th=[ 3818], 00:17:20.762 | 30.00th=[ 5276], 40.00th=[ 6456], 50.00th=[ 7111], 60.00th=[ 8029], 00:17:20.763 | 70.00th=[ 8848], 80.00th=[10421], 90.00th=[13173], 95.00th=[22414], 00:17:20.763 | 99.00th=[42730], 99.50th=[44827], 99.90th=[52167], 99.95th=[84411], 00:17:20.763 | 99.99th=[96994] 00:17:20.763 bw ( KiB/s): min= 1216, max=45242, per=100.00%, avg=20042.81, stdev=14574.89, samples=26 00:17:20.763 iops : min= 304, max=11310, avg=5010.62, stdev=3643.62, samples=26 00:17:20.763 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.11% 00:17:20.763 lat (msec) : 2=4.42%, 4=5.96%, 10=28.38%, 20=9.80%, 50=46.93% 00:17:20.763 lat (msec) : 100=1.99%, 250=2.23%, 500=0.13% 00:17:20.763 cpu : usr=99.16%, sys=0.20%, ctx=47, majf=0, minf=5554 00:17:20.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:20.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.763 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:20.763 issued rwts: total=65472,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:20.763 second_half: (groupid=0, jobs=1): err= 0: pid=77410: Wed Apr 24 20:23:47 2024 00:17:20.763 read: IOPS=2409, BW=9639KiB/s (9871kB/s)(256MiB/27177msec) 00:17:20.763 slat (nsec): min=3461, max=70487, avg=10463.27, stdev=4103.41 00:17:20.763 clat (msec): min=9, max=250, avg=45.24, stdev=28.34 00:17:20.763 lat (msec): min=9, max=250, avg=45.25, stdev=28.34 00:17:20.763 clat percentiles (msec): 00:17:20.763 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 37], 00:17:20.763 | 30.00th=[ 37], 40.00th=[ 38], 50.00th=[ 38], 60.00th=[ 39], 00:17:20.763 | 70.00th=[ 41], 80.00th=[ 44], 90.00th=[ 50], 95.00th=[ 87], 00:17:20.763 | 99.00th=[ 203], 99.50th=[ 215], 99.90th=[ 241], 99.95th=[ 245], 00:17:20.763 | 99.99th=[ 249] 00:17:20.763 write: IOPS=2424, BW=9697KiB/s (9930kB/s)(256MiB/27034msec); 0 zone resets 00:17:20.763 slat (usec): min=4, max=1878, avg= 9.77, stdev=12.34 00:17:20.763 clat (usec): min=471, max=47750, avg=7843.04, stdev=4431.10 00:17:20.763 lat (usec): min=476, max=47759, avg=7852.81, stdev=4431.15 00:17:20.763 clat percentiles (usec): 00:17:20.763 | 1.00th=[ 1221], 5.00th=[ 2343], 10.00th=[ 3359], 20.00th=[ 4752], 00:17:20.763 | 30.00th=[ 5997], 40.00th=[ 6718], 50.00th=[ 7439], 60.00th=[ 8029], 00:17:20.763 | 70.00th=[ 8717], 80.00th=[10159], 90.00th=[12518], 95.00th=[13698], 00:17:20.763 | 99.00th=[23200], 99.50th=[36963], 99.90th=[45351], 99.95th=[46400], 00:17:20.763 | 99.99th=[46924] 00:17:20.763 bw ( KiB/s): min= 976, max=40793, per=100.00%, avg=20957.92, stdev=13095.52, samples=25 00:17:20.763 iops : min= 244, max=10198, avg=5239.48, stdev=3273.86, samples=25 00:17:20.763 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.13% 00:17:20.763 lat (msec) : 2=1.55%, 4=5.18%, 10=32.56%, 20=9.96%, 50=45.87% 00:17:20.763 lat (msec) : 100=2.44%, 
250=2.26%, 500=0.01% 00:17:20.763 cpu : usr=99.15%, sys=0.21%, ctx=41, majf=0, minf=5561 00:17:20.763 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:20.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.763 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:20.763 issued rwts: total=65492,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.763 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:20.763 00:17:20.763 Run status group 0 (all jobs): 00:17:20.763 READ: bw=18.7MiB/s (19.6MB/s), 9556KiB/s-9639KiB/s (9785kB/s-9871kB/s), io=512MiB (536MB), run=27177-27407msec 00:17:20.763 WRITE: bw=18.7MiB/s (19.6MB/s), 9580KiB/s-9697KiB/s (9809kB/s-9930kB/s), io=512MiB (537MB), run=27034-27365msec 00:17:20.763 ----------------------------------------------------- 00:17:20.763 Suppressions used: 00:17:20.763 count bytes template 00:17:20.763 2 10 /usr/src/fio/parse.c 00:17:20.763 3 288 /usr/src/fio/iolog.c 00:17:20.763 1 8 libtcmalloc_minimal.so 00:17:20.763 1 904 libcrypto.so 00:17:20.763 ----------------------------------------------------- 00:17:20.763 00:17:20.763 20:23:50 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:20.763 20:23:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:20.763 20:23:50 -- common/autotest_common.sh@10 -- # set +x 00:17:20.763 20:23:50 -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:20.763 20:23:50 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:20.763 20:23:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:20.763 20:23:50 -- common/autotest_common.sh@10 -- # set +x 00:17:20.763 20:23:50 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:20.763 20:23:50 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:20.763 20:23:50 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:17:20.763 20:23:50 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:20.763 20:23:50 -- common/autotest_common.sh@1325 -- # local sanitizers 00:17:20.763 20:23:50 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:20.763 20:23:50 -- common/autotest_common.sh@1327 -- # shift 00:17:20.763 20:23:50 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:17:20.763 20:23:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:17:20.763 20:23:50 -- common/autotest_common.sh@1331 -- # grep libasan 00:17:20.763 20:23:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:17:20.763 20:23:50 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:20.763 20:23:50 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:20.763 20:23:50 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:20.763 20:23:50 -- common/autotest_common.sh@1333 -- # break 00:17:20.763 20:23:50 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:20.763 20:23:50 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:20.763 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=128 00:17:20.763 fio-3.35 00:17:20.763 Starting 1 thread 00:17:35.676 00:17:35.676 test: (groupid=0, jobs=1): err= 0: pid=77767: Wed Apr 24 20:24:05 2024 00:17:35.676 read: IOPS=7633, BW=29.8MiB/s (31.3MB/s)(255MiB/8542msec) 00:17:35.676 slat (nsec): min=3342, max=46630, avg=5334.46, stdev=1613.22 00:17:35.676 clat (usec): min=657, max=34354, avg=16760.32, stdev=1661.37 00:17:35.676 lat (usec): min=661, max=34359, avg=16765.66, stdev=1661.42 00:17:35.676 clat percentiles (usec): 00:17:35.676 | 1.00th=[15270], 5.00th=[15533], 10.00th=[15664], 20.00th=[15926], 00:17:35.676 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16319], 60.00th=[16581], 00:17:35.676 | 70.00th=[16712], 80.00th=[17171], 90.00th=[18220], 95.00th=[19792], 00:17:35.676 | 99.00th=[25035], 99.50th=[27132], 99.90th=[29230], 99.95th=[30278], 00:17:35.676 | 99.99th=[33817] 00:17:35.676 write: IOPS=12.4k, BW=48.4MiB/s (50.8MB/s)(256MiB/5284msec); 0 zone resets 00:17:35.676 slat (usec): min=4, max=1641, avg= 8.20, stdev=12.94 00:17:35.676 clat (usec): min=591, max=70441, avg=10271.38, stdev=12949.71 00:17:35.676 lat (usec): min=598, max=70465, avg=10279.58, stdev=12949.72 00:17:35.676 clat percentiles (usec): 00:17:35.676 | 1.00th=[ 906], 5.00th=[ 1106], 10.00th=[ 1237], 20.00th=[ 1467], 00:17:35.677 | 30.00th=[ 1696], 40.00th=[ 2573], 50.00th=[ 6325], 60.00th=[ 7439], 00:17:35.677 | 70.00th=[ 8979], 80.00th=[12125], 90.00th=[34866], 95.00th=[38011], 00:17:35.677 | 99.00th=[53216], 99.50th=[58983], 99.90th=[65274], 99.95th=[67634], 00:17:35.677 | 99.99th=[69731] 00:17:35.677 bw ( KiB/s): min=24472, max=70816, per=96.07%, avg=47662.55, stdev=12193.01, samples=11 00:17:35.677 iops : min= 6118, max=17704, avg=11915.64, stdev=3048.25, samples=11 00:17:35.677 lat (usec) : 750=0.06%, 1000=1.10% 00:17:35.677 lat (msec) : 2=17.27%, 4=2.63%, 10=16.20%, 20=52.67%, 50=9.36% 00:17:35.677 lat (msec) : 100=0.72% 00:17:35.677 cpu : usr=98.81%, sys=0.39%, ctx=56, majf=0, minf=5567 00:17:35.677 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:35.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:35.677 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:35.677 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:35.677 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:35.677 00:17:35.677 Run status group 0 (all jobs): 00:17:35.677 READ: bw=29.8MiB/s (31.3MB/s), 29.8MiB/s-29.8MiB/s (31.3MB/s-31.3MB/s), io=255MiB (267MB), run=8542-8542msec 00:17:35.677 WRITE: bw=48.4MiB/s (50.8MB/s), 48.4MiB/s-48.4MiB/s (50.8MB/s-50.8MB/s), io=256MiB (268MB), run=5284-5284msec 00:17:37.667 ----------------------------------------------------- 00:17:37.667 Suppressions used: 00:17:37.667 count bytes template 00:17:37.667 1 5 /usr/src/fio/parse.c 00:17:37.667 2 192 /usr/src/fio/iolog.c 00:17:37.667 1 8 libtcmalloc_minimal.so 00:17:37.667 1 904 libcrypto.so 00:17:37.667 ----------------------------------------------------- 00:17:37.667 00:17:37.667 20:24:07 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:37.667 20:24:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:37.667 20:24:07 -- common/autotest_common.sh@10 -- # set +x 00:17:37.667 20:24:07 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:37.667 Remove shared memory files 00:17:37.667 20:24:07 -- ftl/fio.sh@85 -- # remove_shm 00:17:37.667 20:24:07 -- ftl/common.sh@204 -- # echo Remove shared memory 
files 00:17:37.667 20:24:07 -- ftl/common.sh@205 -- # rm -f rm -f 00:17:37.667 20:24:07 -- ftl/common.sh@206 -- # rm -f rm -f 00:17:37.667 20:24:07 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid61563 /dev/shm/spdk_tgt_trace.pid75975 00:17:37.667 20:24:07 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:37.667 20:24:07 -- ftl/common.sh@209 -- # rm -f rm -f 00:17:37.667 ************************************ 00:17:37.667 END TEST ftl_fio_basic 00:17:37.667 ************************************ 00:17:37.667 00:17:37.667 real 1m12.896s 00:17:37.667 user 2m39.551s 00:17:37.667 sys 0m3.913s 00:17:37.667 20:24:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:37.667 20:24:07 -- common/autotest_common.sh@10 -- # set +x 00:17:37.927 20:24:07 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:37.927 20:24:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:37.927 20:24:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:37.927 20:24:07 -- common/autotest_common.sh@10 -- # set +x 00:17:37.927 ************************************ 00:17:37.927 START TEST ftl_bdevperf 00:17:37.927 ************************************ 00:17:37.927 20:24:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:37.927 * Looking for test storage... 00:17:38.187 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:38.187 20:24:08 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:38.187 20:24:08 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:38.187 20:24:08 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:38.187 20:24:08 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:38.187 20:24:08 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:38.187 20:24:08 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:38.187 20:24:08 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:38.187 20:24:08 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:38.187 20:24:08 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:38.187 20:24:08 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:38.187 20:24:08 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:38.187 20:24:08 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:38.187 20:24:08 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:38.187 20:24:08 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:38.187 20:24:08 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:38.187 20:24:08 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:38.187 20:24:08 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:38.187 20:24:08 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:38.187 20:24:08 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:38.187 20:24:08 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:38.187 20:24:08 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:38.187 20:24:08 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:38.187 20:24:08 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:38.187 20:24:08 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:38.187 20:24:08 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:38.187 20:24:08 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:38.187 20:24:08 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:38.187 20:24:08 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:38.187 20:24:08 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:38.187 20:24:08 -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:38.187 20:24:08 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:38.187 20:24:08 -- ftl/bdevperf.sh@13 -- # use_append= 00:17:38.187 20:24:08 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:38.187 20:24:08 -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:38.187 20:24:08 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:38.187 20:24:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:38.187 20:24:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
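The launch recorded around this point follows SPDK's standard daemon-under-test pattern: start bdevperf with -z so it idles until an RPC arrives, remember the PID, install a cleanup trap, then block until the RPC socket is up. A minimal sketch of that pattern, assuming $rootdir is the SPDK checkout and that test/common/autotest_common.sh (which provides waitforlisten and killprocess) has already been sourced, as it is in this run:

    # bdevperf: -z = idle until an RPC kicks off the workload,
    #           -T ftl0 = only exercise the ftl0 bdev
    "$rootdir/build/examples/bdevperf" -z -T ftl0 &
    bdevperf_pid=$!
    # Kill the daemon even if the test aborts early.
    trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
    # Block until the UNIX domain socket /var/tmp/spdk.sock accepts connections.
    waitforlisten "$bdevperf_pid"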
00:17:38.187 20:24:08 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=78010 00:17:38.187 20:24:08 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:38.187 20:24:08 -- ftl/bdevperf.sh@22 -- # waitforlisten 78010 00:17:38.187 20:24:08 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:38.187 20:24:08 -- common/autotest_common.sh@817 -- # '[' -z 78010 ']' 00:17:38.187 20:24:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.187 20:24:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:38.187 20:24:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.187 20:24:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:38.187 20:24:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.187 [2024-04-24 20:24:08.303470] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:17:38.187 [2024-04-24 20:24:08.303795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78010 ] 00:17:38.446 [2024-04-24 20:24:08.479563] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.705 [2024-04-24 20:24:08.727385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.964 20:24:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:38.964 20:24:09 -- common/autotest_common.sh@850 -- # return 0 00:17:38.964 20:24:09 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:38.964 20:24:09 -- ftl/common.sh@54 -- # local name=nvme0 00:17:38.964 20:24:09 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:38.964 20:24:09 -- ftl/common.sh@56 -- # local size=103424 00:17:38.964 20:24:09 -- ftl/common.sh@59 -- # local base_bdev 00:17:38.964 20:24:09 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:39.533 20:24:09 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:39.533 20:24:09 -- ftl/common.sh@62 -- # local base_size 00:17:39.533 20:24:09 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:39.533 20:24:09 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:17:39.533 20:24:09 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:39.533 20:24:09 -- common/autotest_common.sh@1366 -- # local bs 00:17:39.533 20:24:09 -- common/autotest_common.sh@1367 -- # local nb 00:17:39.533 20:24:09 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:39.533 20:24:09 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:39.533 { 00:17:39.533 "name": "nvme0n1", 00:17:39.533 "aliases": [ 00:17:39.533 "7be57737-359b-4d36-9b2b-8817328894c7" 00:17:39.533 ], 00:17:39.533 "product_name": "NVMe disk", 00:17:39.533 "block_size": 4096, 00:17:39.533 "num_blocks": 1310720, 00:17:39.533 "uuid": "7be57737-359b-4d36-9b2b-8817328894c7", 00:17:39.533 "assigned_rate_limits": { 00:17:39.533 "rw_ios_per_sec": 0, 00:17:39.533 "rw_mbytes_per_sec": 0, 00:17:39.533 "r_mbytes_per_sec": 0, 00:17:39.533 "w_mbytes_per_sec": 0 00:17:39.533 }, 00:17:39.533 "claimed": true, 00:17:39.533 "claim_type": "read_many_write_one", 00:17:39.533 "zoned": false, 00:17:39.533 "supported_io_types": { 00:17:39.534 
"read": true, 00:17:39.534 "write": true, 00:17:39.534 "unmap": true, 00:17:39.534 "write_zeroes": true, 00:17:39.534 "flush": true, 00:17:39.534 "reset": true, 00:17:39.534 "compare": true, 00:17:39.534 "compare_and_write": false, 00:17:39.534 "abort": true, 00:17:39.534 "nvme_admin": true, 00:17:39.534 "nvme_io": true 00:17:39.534 }, 00:17:39.534 "driver_specific": { 00:17:39.534 "nvme": [ 00:17:39.534 { 00:17:39.534 "pci_address": "0000:00:11.0", 00:17:39.534 "trid": { 00:17:39.534 "trtype": "PCIe", 00:17:39.534 "traddr": "0000:00:11.0" 00:17:39.534 }, 00:17:39.534 "ctrlr_data": { 00:17:39.534 "cntlid": 0, 00:17:39.534 "vendor_id": "0x1b36", 00:17:39.534 "model_number": "QEMU NVMe Ctrl", 00:17:39.534 "serial_number": "12341", 00:17:39.534 "firmware_revision": "8.0.0", 00:17:39.534 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:39.534 "oacs": { 00:17:39.534 "security": 0, 00:17:39.534 "format": 1, 00:17:39.534 "firmware": 0, 00:17:39.534 "ns_manage": 1 00:17:39.534 }, 00:17:39.534 "multi_ctrlr": false, 00:17:39.534 "ana_reporting": false 00:17:39.534 }, 00:17:39.534 "vs": { 00:17:39.534 "nvme_version": "1.4" 00:17:39.534 }, 00:17:39.534 "ns_data": { 00:17:39.534 "id": 1, 00:17:39.534 "can_share": false 00:17:39.534 } 00:17:39.534 } 00:17:39.534 ], 00:17:39.534 "mp_policy": "active_passive" 00:17:39.534 } 00:17:39.534 } 00:17:39.534 ]' 00:17:39.534 20:24:09 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:39.534 20:24:09 -- common/autotest_common.sh@1369 -- # bs=4096 00:17:39.534 20:24:09 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:39.534 20:24:09 -- common/autotest_common.sh@1370 -- # nb=1310720 00:17:39.534 20:24:09 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:17:39.534 20:24:09 -- common/autotest_common.sh@1374 -- # echo 5120 00:17:39.534 20:24:09 -- ftl/common.sh@63 -- # base_size=5120 00:17:39.534 20:24:09 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:39.534 20:24:09 -- ftl/common.sh@67 -- # clear_lvols 00:17:39.534 20:24:09 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:39.534 20:24:09 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:39.793 20:24:10 -- ftl/common.sh@28 -- # stores=698f1177-274f-4ef8-be65-2f3a4f3f144b 00:17:39.793 20:24:10 -- ftl/common.sh@29 -- # for lvs in $stores 00:17:39.793 20:24:10 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 698f1177-274f-4ef8-be65-2f3a4f3f144b 00:17:40.053 20:24:10 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:40.313 20:24:10 -- ftl/common.sh@68 -- # lvs=15ebf23c-0d9b-47f7-a29e-151ddbdfd59d 00:17:40.313 20:24:10 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 15ebf23c-0d9b-47f7-a29e-151ddbdfd59d 00:17:40.573 20:24:10 -- ftl/bdevperf.sh@23 -- # split_bdev=89200c2a-a1c1-402e-ada0-15835240e805 00:17:40.573 20:24:10 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 89200c2a-a1c1-402e-ada0-15835240e805 00:17:40.573 20:24:10 -- ftl/common.sh@35 -- # local name=nvc0 00:17:40.573 20:24:10 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:40.573 20:24:10 -- ftl/common.sh@37 -- # local base_bdev=89200c2a-a1c1-402e-ada0-15835240e805 00:17:40.573 20:24:10 -- ftl/common.sh@38 -- # local cache_size= 00:17:40.573 20:24:10 -- ftl/common.sh@41 -- # get_bdev_size 89200c2a-a1c1-402e-ada0-15835240e805 00:17:40.573 20:24:10 -- 
common/autotest_common.sh@1364 -- # local bdev_name=89200c2a-a1c1-402e-ada0-15835240e805 00:17:40.573 20:24:10 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:40.573 20:24:10 -- common/autotest_common.sh@1366 -- # local bs 00:17:40.573 20:24:10 -- common/autotest_common.sh@1367 -- # local nb 00:17:40.573 20:24:10 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89200c2a-a1c1-402e-ada0-15835240e805 00:17:40.573 20:24:10 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:40.573 { 00:17:40.573 "name": "89200c2a-a1c1-402e-ada0-15835240e805", 00:17:40.573 "aliases": [ 00:17:40.573 "lvs/nvme0n1p0" 00:17:40.573 ], 00:17:40.573 "product_name": "Logical Volume", 00:17:40.573 "block_size": 4096, 00:17:40.573 "num_blocks": 26476544, 00:17:40.573 "uuid": "89200c2a-a1c1-402e-ada0-15835240e805", 00:17:40.573 "assigned_rate_limits": { 00:17:40.573 "rw_ios_per_sec": 0, 00:17:40.573 "rw_mbytes_per_sec": 0, 00:17:40.573 "r_mbytes_per_sec": 0, 00:17:40.573 "w_mbytes_per_sec": 0 00:17:40.573 }, 00:17:40.573 "claimed": false, 00:17:40.573 "zoned": false, 00:17:40.573 "supported_io_types": { 00:17:40.573 "read": true, 00:17:40.573 "write": true, 00:17:40.573 "unmap": true, 00:17:40.573 "write_zeroes": true, 00:17:40.573 "flush": false, 00:17:40.573 "reset": true, 00:17:40.573 "compare": false, 00:17:40.573 "compare_and_write": false, 00:17:40.573 "abort": false, 00:17:40.573 "nvme_admin": false, 00:17:40.573 "nvme_io": false 00:17:40.573 }, 00:17:40.573 "driver_specific": { 00:17:40.573 "lvol": { 00:17:40.573 "lvol_store_uuid": "15ebf23c-0d9b-47f7-a29e-151ddbdfd59d", 00:17:40.573 "base_bdev": "nvme0n1", 00:17:40.573 "thin_provision": true, 00:17:40.573 "snapshot": false, 00:17:40.573 "clone": false, 00:17:40.573 "esnap_clone": false 00:17:40.573 } 00:17:40.573 } 00:17:40.573 } 00:17:40.573 ]' 00:17:40.573 20:24:10 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:40.832 20:24:10 -- common/autotest_common.sh@1369 -- # bs=4096 00:17:40.832 20:24:10 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:40.832 20:24:10 -- common/autotest_common.sh@1370 -- # nb=26476544 00:17:40.832 20:24:10 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:17:40.832 20:24:10 -- common/autotest_common.sh@1374 -- # echo 103424 00:17:40.832 20:24:10 -- ftl/common.sh@41 -- # local base_size=5171 00:17:40.832 20:24:10 -- ftl/common.sh@44 -- # local nvc_bdev 00:17:40.832 20:24:10 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:41.090 20:24:11 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:41.090 20:24:11 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:41.091 20:24:11 -- ftl/common.sh@48 -- # get_bdev_size 89200c2a-a1c1-402e-ada0-15835240e805 00:17:41.091 20:24:11 -- common/autotest_common.sh@1364 -- # local bdev_name=89200c2a-a1c1-402e-ada0-15835240e805 00:17:41.091 20:24:11 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:41.091 20:24:11 -- common/autotest_common.sh@1366 -- # local bs 00:17:41.091 20:24:11 -- common/autotest_common.sh@1367 -- # local nb 00:17:41.091 20:24:11 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89200c2a-a1c1-402e-ada0-15835240e805 00:17:41.409 20:24:11 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:41.409 { 00:17:41.409 "name": "89200c2a-a1c1-402e-ada0-15835240e805", 00:17:41.409 "aliases": [ 00:17:41.409 "lvs/nvme0n1p0" 00:17:41.409 ], 
00:17:41.409 "product_name": "Logical Volume", 00:17:41.409 "block_size": 4096, 00:17:41.409 "num_blocks": 26476544, 00:17:41.409 "uuid": "89200c2a-a1c1-402e-ada0-15835240e805", 00:17:41.409 "assigned_rate_limits": { 00:17:41.409 "rw_ios_per_sec": 0, 00:17:41.409 "rw_mbytes_per_sec": 0, 00:17:41.409 "r_mbytes_per_sec": 0, 00:17:41.409 "w_mbytes_per_sec": 0 00:17:41.409 }, 00:17:41.409 "claimed": false, 00:17:41.409 "zoned": false, 00:17:41.409 "supported_io_types": { 00:17:41.409 "read": true, 00:17:41.409 "write": true, 00:17:41.409 "unmap": true, 00:17:41.409 "write_zeroes": true, 00:17:41.409 "flush": false, 00:17:41.409 "reset": true, 00:17:41.409 "compare": false, 00:17:41.409 "compare_and_write": false, 00:17:41.409 "abort": false, 00:17:41.409 "nvme_admin": false, 00:17:41.409 "nvme_io": false 00:17:41.409 }, 00:17:41.409 "driver_specific": { 00:17:41.409 "lvol": { 00:17:41.409 "lvol_store_uuid": "15ebf23c-0d9b-47f7-a29e-151ddbdfd59d", 00:17:41.409 "base_bdev": "nvme0n1", 00:17:41.409 "thin_provision": true, 00:17:41.409 "snapshot": false, 00:17:41.409 "clone": false, 00:17:41.409 "esnap_clone": false 00:17:41.409 } 00:17:41.409 } 00:17:41.409 } 00:17:41.409 ]' 00:17:41.409 20:24:11 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:41.409 20:24:11 -- common/autotest_common.sh@1369 -- # bs=4096 00:17:41.409 20:24:11 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:41.409 20:24:11 -- common/autotest_common.sh@1370 -- # nb=26476544 00:17:41.409 20:24:11 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:17:41.409 20:24:11 -- common/autotest_common.sh@1374 -- # echo 103424 00:17:41.409 20:24:11 -- ftl/common.sh@48 -- # cache_size=5171 00:17:41.409 20:24:11 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:41.409 20:24:11 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:17:41.409 20:24:11 -- ftl/bdevperf.sh@26 -- # get_bdev_size 89200c2a-a1c1-402e-ada0-15835240e805 00:17:41.409 20:24:11 -- common/autotest_common.sh@1364 -- # local bdev_name=89200c2a-a1c1-402e-ada0-15835240e805 00:17:41.409 20:24:11 -- common/autotest_common.sh@1365 -- # local bdev_info 00:17:41.409 20:24:11 -- common/autotest_common.sh@1366 -- # local bs 00:17:41.409 20:24:11 -- common/autotest_common.sh@1367 -- # local nb 00:17:41.409 20:24:11 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 89200c2a-a1c1-402e-ada0-15835240e805 00:17:41.669 20:24:11 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:17:41.669 { 00:17:41.669 "name": "89200c2a-a1c1-402e-ada0-15835240e805", 00:17:41.669 "aliases": [ 00:17:41.669 "lvs/nvme0n1p0" 00:17:41.669 ], 00:17:41.670 "product_name": "Logical Volume", 00:17:41.670 "block_size": 4096, 00:17:41.670 "num_blocks": 26476544, 00:17:41.670 "uuid": "89200c2a-a1c1-402e-ada0-15835240e805", 00:17:41.670 "assigned_rate_limits": { 00:17:41.670 "rw_ios_per_sec": 0, 00:17:41.670 "rw_mbytes_per_sec": 0, 00:17:41.670 "r_mbytes_per_sec": 0, 00:17:41.670 "w_mbytes_per_sec": 0 00:17:41.670 }, 00:17:41.670 "claimed": false, 00:17:41.670 "zoned": false, 00:17:41.670 "supported_io_types": { 00:17:41.670 "read": true, 00:17:41.670 "write": true, 00:17:41.670 "unmap": true, 00:17:41.670 "write_zeroes": true, 00:17:41.670 "flush": false, 00:17:41.670 "reset": true, 00:17:41.670 "compare": false, 00:17:41.670 "compare_and_write": false, 00:17:41.670 "abort": false, 00:17:41.670 "nvme_admin": false, 00:17:41.670 "nvme_io": false 00:17:41.670 }, 00:17:41.670 
"driver_specific": { 00:17:41.670 "lvol": { 00:17:41.670 "lvol_store_uuid": "15ebf23c-0d9b-47f7-a29e-151ddbdfd59d", 00:17:41.670 "base_bdev": "nvme0n1", 00:17:41.670 "thin_provision": true, 00:17:41.670 "snapshot": false, 00:17:41.670 "clone": false, 00:17:41.670 "esnap_clone": false 00:17:41.670 } 00:17:41.670 } 00:17:41.670 } 00:17:41.670 ]' 00:17:41.670 20:24:11 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:17:41.929 20:24:11 -- common/autotest_common.sh@1369 -- # bs=4096 00:17:41.929 20:24:11 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:17:41.929 20:24:11 -- common/autotest_common.sh@1370 -- # nb=26476544 00:17:41.929 20:24:11 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:17:41.929 20:24:11 -- common/autotest_common.sh@1374 -- # echo 103424 00:17:41.929 20:24:11 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:17:41.929 20:24:11 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 89200c2a-a1c1-402e-ada0-15835240e805 -c nvc0n1p0 --l2p_dram_limit 20 00:17:41.929 [2024-04-24 20:24:12.119008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.929 [2024-04-24 20:24:12.119074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:41.929 [2024-04-24 20:24:12.119091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:41.929 [2024-04-24 20:24:12.119104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.929 [2024-04-24 20:24:12.119171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.929 [2024-04-24 20:24:12.119186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:41.929 [2024-04-24 20:24:12.119196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:41.929 [2024-04-24 20:24:12.119209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.929 [2024-04-24 20:24:12.119234] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:41.930 [2024-04-24 20:24:12.120364] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:41.930 [2024-04-24 20:24:12.120387] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.930 [2024-04-24 20:24:12.120407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:41.930 [2024-04-24 20:24:12.120419] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.166 ms 00:17:41.930 [2024-04-24 20:24:12.120447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.930 [2024-04-24 20:24:12.120488] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 03150371-27ad-4792-9b56-5710666ebfd8 00:17:41.930 [2024-04-24 20:24:12.122003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.930 [2024-04-24 20:24:12.122039] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:41.930 [2024-04-24 20:24:12.122055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:41.930 [2024-04-24 20:24:12.122066] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.930 [2024-04-24 20:24:12.129744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.930 [2024-04-24 20:24:12.129790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:17:41.930 [2024-04-24 20:24:12.129806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.588 ms 00:17:41.930 [2024-04-24 20:24:12.129817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.930 [2024-04-24 20:24:12.129930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.930 [2024-04-24 20:24:12.129947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:41.930 [2024-04-24 20:24:12.129961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:41.930 [2024-04-24 20:24:12.129971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.930 [2024-04-24 20:24:12.130044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.930 [2024-04-24 20:24:12.130056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:41.930 [2024-04-24 20:24:12.130073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:41.930 [2024-04-24 20:24:12.130083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.930 [2024-04-24 20:24:12.130109] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.930 [2024-04-24 20:24:12.136112] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.930 [2024-04-24 20:24:12.136148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:41.930 [2024-04-24 20:24:12.136161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.022 ms 00:17:41.930 [2024-04-24 20:24:12.136174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.930 [2024-04-24 20:24:12.136207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.930 [2024-04-24 20:24:12.136220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:41.930 [2024-04-24 20:24:12.136231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:41.930 [2024-04-24 20:24:12.136243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.930 [2024-04-24 20:24:12.136290] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:41.930 [2024-04-24 20:24:12.136404] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:41.930 [2024-04-24 20:24:12.136418] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:41.930 [2024-04-24 20:24:12.136436] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:41.930 [2024-04-24 20:24:12.136449] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:41.930 [2024-04-24 20:24:12.136463] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:41.930 [2024-04-24 20:24:12.136475] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:41.930 [2024-04-24 20:24:12.136488] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:41.930 [2024-04-24 20:24:12.136500] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:41.930 [2024-04-24 20:24:12.136513] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 
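The layout summary just printed pins down the geometry for the rest of the run: 103424 MiB of base capacity, a 5171 MiB NV cache, and 20971520 four-byte L2P entries. A quick arithmetic check, using only values from this trace, ties those numbers to the region dump that follows:

    # 20971520 L2P entries x 4 bytes = 80 MiB, which is exactly the
    # "Region l2p ... blocks: 80.00 MiB" entry in the layout dump below.
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80
    # --l2p_dram_limit 20 caps the resident slice of that 80 MiB table,
    # hence the later "l2p maximum resident size is: 19 (of 20) MiB" notice.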
00:17:41.930 [2024-04-24 20:24:12.136523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.930 [2024-04-24 20:24:12.136538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:41.930 [2024-04-24 20:24:12.136548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:17:41.930 [2024-04-24 20:24:12.136560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.930 [2024-04-24 20:24:12.136617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.930 [2024-04-24 20:24:12.136630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:41.930 [2024-04-24 20:24:12.136641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:41.930 [2024-04-24 20:24:12.136653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.930 [2024-04-24 20:24:12.136722] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:41.930 [2024-04-24 20:24:12.136738] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:41.930 [2024-04-24 20:24:12.136749] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.930 [2024-04-24 20:24:12.136761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.930 [2024-04-24 20:24:12.136772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:41.930 [2024-04-24 20:24:12.136784] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:41.930 [2024-04-24 20:24:12.136793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:41.930 [2024-04-24 20:24:12.136805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:41.930 [2024-04-24 20:24:12.136825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:41.930 [2024-04-24 20:24:12.136837] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.930 [2024-04-24 20:24:12.136847] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:41.930 [2024-04-24 20:24:12.136878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:41.930 [2024-04-24 20:24:12.136888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:41.930 [2024-04-24 20:24:12.136902] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:41.930 [2024-04-24 20:24:12.136915] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:17:41.930 [2024-04-24 20:24:12.136926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.930 [2024-04-24 20:24:12.136935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:41.930 [2024-04-24 20:24:12.136949] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:17:41.930 [2024-04-24 20:24:12.136959] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.930 [2024-04-24 20:24:12.136970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:41.930 [2024-04-24 20:24:12.136979] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:17:41.930 [2024-04-24 20:24:12.136991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:41.930 [2024-04-24 20:24:12.137000] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:41.930 [2024-04-24 20:24:12.137012] ftl_layout.c: 116:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 81.12 MiB 00:17:41.930 [2024-04-24 20:24:12.137021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:41.930 [2024-04-24 20:24:12.137033] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:41.930 [2024-04-24 20:24:12.137042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:17:41.930 [2024-04-24 20:24:12.137053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:41.930 [2024-04-24 20:24:12.137062] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:41.930 [2024-04-24 20:24:12.137074] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:41.930 [2024-04-24 20:24:12.137083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:41.930 [2024-04-24 20:24:12.137094] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:41.930 [2024-04-24 20:24:12.137103] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:17:41.930 [2024-04-24 20:24:12.137117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:41.930 [2024-04-24 20:24:12.137126] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:41.930 [2024-04-24 20:24:12.137139] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:41.930 [2024-04-24 20:24:12.137148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.930 [2024-04-24 20:24:12.137159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:41.930 [2024-04-24 20:24:12.137167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:17:41.930 [2024-04-24 20:24:12.137179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:41.930 [2024-04-24 20:24:12.137189] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:41.930 [2024-04-24 20:24:12.137201] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:41.930 [2024-04-24 20:24:12.137210] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:41.930 [2024-04-24 20:24:12.137222] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:41.930 [2024-04-24 20:24:12.137232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:41.930 [2024-04-24 20:24:12.137243] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:41.930 [2024-04-24 20:24:12.137253] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:41.930 [2024-04-24 20:24:12.137265] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:41.930 [2024-04-24 20:24:12.137274] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:41.930 [2024-04-24 20:24:12.137288] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:41.930 [2024-04-24 20:24:12.137299] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:41.930 [2024-04-24 20:24:12.137316] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.930 [2024-04-24 20:24:12.137327] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:41.930 [2024-04-24 20:24:12.137340] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:17:41.930 [2024-04-24 20:24:12.137350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:17:41.931 [2024-04-24 20:24:12.137363] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:17:41.931 [2024-04-24 20:24:12.137373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:17:41.931 [2024-04-24 20:24:12.137385] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:17:41.931 [2024-04-24 20:24:12.137395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:17:41.931 [2024-04-24 20:24:12.137407] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:17:41.931 [2024-04-24 20:24:12.137418] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:17:41.931 [2024-04-24 20:24:12.137430] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:17:41.931 [2024-04-24 20:24:12.137440] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:17:41.931 [2024-04-24 20:24:12.137452] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:17:41.931 [2024-04-24 20:24:12.137463] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:17:41.931 [2024-04-24 20:24:12.137479] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:41.931 [2024-04-24 20:24:12.137490] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:41.931 [2024-04-24 20:24:12.137504] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:41.931 [2024-04-24 20:24:12.137514] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:41.931 [2024-04-24 20:24:12.137526] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:41.931 [2024-04-24 20:24:12.137536] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:41.931 [2024-04-24 20:24:12.137549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.931 [2024-04-24 20:24:12.137559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:41.931 [2024-04-24 20:24:12.137571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:17:41.931 [2024-04-24 20:24:12.137581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.190 [2024-04-24 20:24:12.163649] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:42.190 [2024-04-24 20:24:12.163695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:42.190 [2024-04-24 20:24:12.163712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.068 ms 00:17:42.190 [2024-04-24 20:24:12.163724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.190 [2024-04-24 20:24:12.163817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.190 [2024-04-24 20:24:12.163828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:42.190 [2024-04-24 20:24:12.163843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:42.190 [2024-04-24 20:24:12.163869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.190 [2024-04-24 20:24:12.228997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.190 [2024-04-24 20:24:12.229059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:42.190 [2024-04-24 20:24:12.229079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.164 ms 00:17:42.190 [2024-04-24 20:24:12.229090] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.190 [2024-04-24 20:24:12.229145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.190 [2024-04-24 20:24:12.229160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:42.190 [2024-04-24 20:24:12.229174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:42.190 [2024-04-24 20:24:12.229184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.190 [2024-04-24 20:24:12.229686] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.190 [2024-04-24 20:24:12.229700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:42.190 [2024-04-24 20:24:12.229714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:17:42.190 [2024-04-24 20:24:12.229724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.190 [2024-04-24 20:24:12.229839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.190 [2024-04-24 20:24:12.229852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:42.190 [2024-04-24 20:24:12.229892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:42.190 [2024-04-24 20:24:12.229916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.190 [2024-04-24 20:24:12.253278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.190 [2024-04-24 20:24:12.253334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:42.190 [2024-04-24 20:24:12.253353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.375 ms 00:17:42.190 [2024-04-24 20:24:12.253364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.190 [2024-04-24 20:24:12.268410] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:42.190 [2024-04-24 20:24:12.274455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.190 [2024-04-24 20:24:12.274504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:42.190 [2024-04-24 20:24:12.274519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
21.006 ms 00:17:42.190 [2024-04-24 20:24:12.274532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.190 [2024-04-24 20:24:12.372060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:42.190 [2024-04-24 20:24:12.372118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:42.190 [2024-04-24 20:24:12.372135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.637 ms 00:17:42.190 [2024-04-24 20:24:12.372148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:42.190 [2024-04-24 20:24:12.372195] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:17:42.190 [2024-04-24 20:24:12.372215] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:17:46.383 [2024-04-24 20:24:16.194841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.383 [2024-04-24 20:24:16.194933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:46.383 [2024-04-24 20:24:16.194954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3828.851 ms 00:17:46.383 [2024-04-24 20:24:16.194972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.383 [2024-04-24 20:24:16.195216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.383 [2024-04-24 20:24:16.195235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:46.383 [2024-04-24 20:24:16.195247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:17:46.383 [2024-04-24 20:24:16.195260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.383 [2024-04-24 20:24:16.238618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.383 [2024-04-24 20:24:16.238718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:46.383 [2024-04-24 20:24:16.238748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.341 ms 00:17:46.383 [2024-04-24 20:24:16.238763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.383 [2024-04-24 20:24:16.282156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.383 [2024-04-24 20:24:16.282239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:46.383 [2024-04-24 20:24:16.282259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.352 ms 00:17:46.383 [2024-04-24 20:24:16.282276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.383 [2024-04-24 20:24:16.282753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.383 [2024-04-24 20:24:16.282792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:46.383 [2024-04-24 20:24:16.282821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:17:46.383 [2024-04-24 20:24:16.282835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.383 [2024-04-24 20:24:16.394149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.383 [2024-04-24 20:24:16.394234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:46.383 [2024-04-24 20:24:16.394255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.400 ms 00:17:46.383 [2024-04-24 20:24:16.394270] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.383 [2024-04-24 20:24:16.439010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.383 [2024-04-24 20:24:16.439099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:46.383 [2024-04-24 20:24:16.439118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.729 ms 00:17:46.383 [2024-04-24 20:24:16.439132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.383 [2024-04-24 20:24:16.441442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.383 [2024-04-24 20:24:16.441483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:46.383 [2024-04-24 20:24:16.441497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.237 ms 00:17:46.383 [2024-04-24 20:24:16.441516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.383 [2024-04-24 20:24:16.484536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.383 [2024-04-24 20:24:16.484619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:46.383 [2024-04-24 20:24:16.484636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.000 ms 00:17:46.383 [2024-04-24 20:24:16.484649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.383 [2024-04-24 20:24:16.484728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.383 [2024-04-24 20:24:16.484747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:46.383 [2024-04-24 20:24:16.484758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:46.383 [2024-04-24 20:24:16.484770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.383 [2024-04-24 20:24:16.484908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.383 [2024-04-24 20:24:16.484924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:46.383 [2024-04-24 20:24:16.484935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:46.383 [2024-04-24 20:24:16.484948] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.383 [2024-04-24 20:24:16.486092] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4373.689 ms, result 0 00:17:46.383 { 00:17:46.383 "name": "ftl0", 00:17:46.383 "uuid": "03150371-27ad-4792-9b56-5710666ebfd8" 00:17:46.383 } 00:17:46.383 20:24:16 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:17:46.383 20:24:16 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:46.383 20:24:16 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:17:46.641 20:24:16 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:46.641 [2024-04-24 20:24:16.794323] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:46.641 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:46.641 Zero copy mechanism will not be used. 00:17:46.641 Running I/O for 4 seconds... 
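The four-second QD1 run launched above exercises a stack that the preceding trace assembled one rpc.py call at a time; condensed into one place it reads as follows (the lvstore and lvol UUIDs are the ones this particular run generated, so treat them as placeholders for any other setup):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base (data) device
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache device
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 15ebf23c-0d9b-47f7-a29e-151ddbdfd59d  # thin lvol
    $rpc bdev_split_create nvc0n1 -s 5171 1   # 5171 MiB write-buffer cache -> nvc0n1p0
    # FTL bdev on top; the first startup scrubs the 4 GiB NV cache data region,
    # which is the ~3.8 s "Scrub NV cache" step in the startup trace above.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d 89200c2a-a1c1-402e-ada0-15835240e805 \
        -c nvc0n1p0 --l2p_dram_limit 20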
00:17:50.827 00:17:50.827 Latency(us) 00:17:50.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.827 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:50.827 ftl0 : 4.00 1754.41 116.50 0.00 0.00 597.38 213.85 1855.54 00:17:50.827 =================================================================================================================== 00:17:50.827 Total : 1754.41 116.50 0.00 0.00 597.38 213.85 1855.54 00:17:50.827 0 00:17:50.827 [2024-04-24 20:24:20.799281] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:50.827 20:24:20 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:50.827 [2024-04-24 20:24:20.897146] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:50.827 Running I/O for 4 seconds... 00:17:55.013 00:17:55.013 Latency(us) 00:17:55.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.013 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.013 ftl0 : 4.01 10401.76 40.63 0.00 0.00 12281.14 235.23 35373.65 00:17:55.013 =================================================================================================================== 00:17:55.013 Total : 10401.76 40.63 0.00 0.00 12281.14 0.00 35373.65 00:17:55.013 [2024-04-24 20:24:24.915716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:55.013 0 00:17:55.013 20:24:24 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:55.013 [2024-04-24 20:24:25.068633] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:55.013 Running I/O for 4 seconds... 
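While that verify pass runs, it is worth lining up the three perform_tests invocations this section issues against the same idling bdevperf daemon (paths as used in this run; the helper talks to the default /var/tmp/spdk.sock socket):

    py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $py perform_tests -q 1 -w randwrite -t 4 -o 69632   # QD1 writes; 69632 > 65536, so zero-copy is skipped
    $py perform_tests -q 128 -w randwrite -t 4 -o 4096  # QD128 4 KiB random writes
    $py perform_tests -q 128 -w verify -t 4 -o 4096     # QD128 4 KiB write-and-verify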
00:17:59.215 00:17:59.215 Latency(us) 00:17:59.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:59.215 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:59.215 Verification LBA range: start 0x0 length 0x1400000 00:17:59.215 ftl0 : 4.01 8365.52 32.68 0.00 0.00 15252.95 263.20 29056.93 00:17:59.215 =================================================================================================================== 00:17:59.215 Total : 8365.52 32.68 0.00 0.00 15252.95 0.00 29056.93 00:17:59.215 [2024-04-24 20:24:29.092214] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:59.215 0 00:17:59.215 20:24:29 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:59.215 [2024-04-24 20:24:29.271284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.215 [2024-04-24 20:24:29.271348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:59.215 [2024-04-24 20:24:29.271365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:59.215 [2024-04-24 20:24:29.271379] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.215 [2024-04-24 20:24:29.271405] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:59.215 [2024-04-24 20:24:29.275054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.215 [2024-04-24 20:24:29.275096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:59.215 [2024-04-24 20:24:29.275112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.630 ms 00:17:59.215 [2024-04-24 20:24:29.275123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.215 [2024-04-24 20:24:29.276849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.215 [2024-04-24 20:24:29.276897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:59.215 [2024-04-24 20:24:29.276915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.682 ms 00:17:59.215 [2024-04-24 20:24:29.276929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.476 [2024-04-24 20:24:29.481942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.476 [2024-04-24 20:24:29.482014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:59.476 [2024-04-24 20:24:29.482039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 205.306 ms 00:17:59.476 [2024-04-24 20:24:29.482051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.476 [2024-04-24 20:24:29.487448] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.476 [2024-04-24 20:24:29.487492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:59.476 [2024-04-24 20:24:29.487509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.346 ms 00:17:59.476 [2024-04-24 20:24:29.487519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.476 [2024-04-24 20:24:29.525304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.476 [2024-04-24 20:24:29.525349] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:59.476 [2024-04-24 20:24:29.525367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
00:17:59.476 [2024-04-24 20:24:29.548066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:59.476 [2024-04-24 20:24:29.548112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:17:59.476 [2024-04-24 20:24:29.548130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.674 ms
00:17:59.476 [2024-04-24 20:24:29.548157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.476 [2024-04-24 20:24:29.548314] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:59.476 [2024-04-24 20:24:29.548328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:17:59.476 [2024-04-24 20:24:29.548342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms
00:17:59.476 [2024-04-24 20:24:29.548352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.476 [2024-04-24 20:24:29.586062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:59.476 [2024-04-24 20:24:29.586117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata
00:17:59.476 [2024-04-24 20:24:29.586137] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.744 ms
00:17:59.476 [2024-04-24 20:24:29.586148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.476 [2024-04-24 20:24:29.624579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:59.476 [2024-04-24 20:24:29.624633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata
00:17:59.476 [2024-04-24 20:24:29.624652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.435 ms
00:17:59.476 [2024-04-24 20:24:29.624662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.476 [2024-04-24 20:24:29.663358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:59.476 [2024-04-24 20:24:29.663413] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:17:59.476 [2024-04-24 20:24:29.663432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.699 ms
00:17:59.476 [2024-04-24 20:24:29.663443] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.476 [2024-04-24 20:24:29.702423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:59.476 [2024-04-24 20:24:29.702491] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:17:59.476 [2024-04-24 20:24:29.702510] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.926 ms
00:17:59.476 [2024-04-24 20:24:29.702520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.476 [2024-04-24 20:24:29.702572] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:17:59.476 [2024-04-24 20:24:29.702590 .. 20:24:29.704660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1 .. Band 100: 0 / 261120 wr_cnt: 0 state: free [100 identical per-band entries condensed]
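Every band in the dump above carries the same validity line, so when scanning a saved copy of this console output it is quicker to aggregate per-band states than to read them one by one. A small sketch, assuming the log was saved to a file (the name autotest.log is hypothetical):

    # Tally FTL bands by state; each dump line ends with "state: <name>"
    grep 'ftl_dev_dump_bands' autotest.log | grep -o 'state: .*' | sort | uniq -c
    # Expected for this run: all 100 bands collapse into a single "state: free" bucket
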
00:17:59.477 [2024-04-24 20:24:29.704688] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:59.477 [2024-04-24 20:24:29.704711] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:      03150371-27ad-4792-9b56-5710666ebfd8
00:17:59.477 [2024-04-24 20:24:29.704732] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:59.477 [2024-04-24 20:24:29.704757] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:     960
00:17:59.477 [2024-04-24 20:24:29.704774] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:      0
00:17:59.477 [2024-04-24 20:24:29.704796] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:              inf
00:17:59.477 [2024-04-24 20:24:29.704814] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:59.477 [2024-04-24 20:24:29.704835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:17:59.477 [2024-04-24 20:24:29.704848] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:17:59.477 [2024-04-24 20:24:29.704882] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:17:59.477 [2024-04-24 20:24:29.704900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:17:59.477 [2024-04-24 20:24:29.704923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:59.477 [2024-04-24 20:24:29.704941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:59.477 [2024-04-24 20:24:29.704963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.355 ms
00:17:59.477 [2024-04-24 20:24:29.704980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.723437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:59.738 [2024-04-24 20:24:29.723488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:59.738 [2024-04-24 20:24:29.723507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.392 ms
00:17:59.738 [2024-04-24 20:24:29.723522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.723844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:59.738 [2024-04-24 20:24:29.723899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:59.738 [2024-04-24 20:24:29.723937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms
00:17:59.738 [2024-04-24 20:24:29.723955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.782295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:59.738 [2024-04-24 20:24:29.782356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:59.738 [2024-04-24 20:24:29.782395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:59.738 [2024-04-24 20:24:29.782407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.782495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:59.738 [2024-04-24 20:24:29.782506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:59.738 [2024-04-24 20:24:29.782519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:59.738 [2024-04-24 20:24:29.782529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.782633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:59.738 [2024-04-24 20:24:29.782653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:59.738 [2024-04-24 20:24:29.782672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:59.738 [2024-04-24 20:24:29.782690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
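The WAF: inf entry above follows from the two counters printed just before it: write amplification here is, in effect, total media writes divided by user writes, and this verify run issued no user writes between startup and shutdown, so the ratio diverges. The same arithmetic with the values copied from the dump (a sketch, not harness code):

    # WAF = total writes / user writes; guard the divide-by-zero case that prints "inf"
    total_writes=960
    user_writes=0
    if [ "$user_writes" -eq 0 ]; then
        echo "WAF: inf"
    else
        echo "WAF: $(echo "scale=2; $total_writes / $user_writes" | bc)"
    fi
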
00:17:59.738 [2024-04-24 20:24:29.782729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:59.738 [2024-04-24 20:24:29.782753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:59.738 [2024-04-24 20:24:29.782767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:59.738 [2024-04-24 20:24:29.782777] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.900983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:59.738 [2024-04-24 20:24:29.901041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:59.738 [2024-04-24 20:24:29.901061] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:59.738 [2024-04-24 20:24:29.901074] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.948130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:59.738 [2024-04-24 20:24:29.948182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:59.738 [2024-04-24 20:24:29.948200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:59.738 [2024-04-24 20:24:29.948210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.948304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:59.738 [2024-04-24 20:24:29.948316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:59.738 [2024-04-24 20:24:29.948330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:59.738 [2024-04-24 20:24:29.948340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.948391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:59.738 [2024-04-24 20:24:29.948403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:59.738 [2024-04-24 20:24:29.948415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:59.738 [2024-04-24 20:24:29.948425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.948561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:59.738 [2024-04-24 20:24:29.948578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:59.738 [2024-04-24 20:24:29.948593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:59.738 [2024-04-24 20:24:29.948609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.948660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:59.738 [2024-04-24 20:24:29.948688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:59.738 [2024-04-24 20:24:29.948712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:59.738 [2024-04-24 20:24:29.948730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.948790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:59.738 [2024-04-24 20:24:29.948806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:59.738 [2024-04-24 20:24:29.948818] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:59.738 [2024-04-24 20:24:29.948828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.948924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:59.738 [2024-04-24 20:24:29.948947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:59.738 [2024-04-24 20:24:29.948969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:59.738 [2024-04-24 20:24:29.948985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:59.738 [2024-04-24 20:24:29.949150] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 678.908 ms, result 0
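With the management process finished, the harness next reaps the bdevperf process (pid 78010); the xtrace lines that follow show each check it performs. Reconstructed from that trace, the helper's shape is roughly the sketch below (not the verbatim autotest_common.sh source):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1           # matches the '[' -z 78010 ']' check in the trace
        kill -0 "$pid" || return 1          # signal 0: verify the process still exists
        if [ "$(uname)" = Linux ]; then
            ps --no-headers -o comm= "$pid" # record which command is about to be killed
        fi
        echo "killing process with pid $pid"
        kill "$pid"                         # default SIGTERM triggers the FTL shutdown path
        wait "$pid"                         # reap; works because the target is a child job
    }
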
00:17:59.738 true
00:17:59.997 20:24:29 -- ftl/bdevperf.sh@37 -- # killprocess 78010
00:17:59.997 20:24:29 -- common/autotest_common.sh@936 -- # '[' -z 78010 ']'
00:17:59.997 20:24:29 -- common/autotest_common.sh@940 -- # kill -0 78010
00:17:59.997 20:24:29 -- common/autotest_common.sh@941 -- # uname
00:17:59.997 20:24:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:59.997 20:24:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78010
00:17:59.997 killing process with pid 78010
Received shutdown signal, test time was about 4.000000 seconds
00:17:59.997
00:17:59.997                                                                    Latency(us)
00:17:59.997 Device Information        : runtime(s)     IOPS     MiB/s    Fail/s     TO/s    Average       min       max
00:17:59.997 ===================================================================================================================
00:17:59.997 Total                     :                0.00      0.00      0.00     0.00       0.00      0.00      0.00
00:17:59.997 20:24:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:17:59.997 20:24:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:17:59.997 20:24:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78010'
00:17:59.997 20:24:30 -- common/autotest_common.sh@955 -- # kill 78010
00:17:59.997 20:24:30 -- common/autotest_common.sh@960 -- # wait 78010
00:18:04.194 20:24:33 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:18:04.194 20:24:33 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:18:04.194 20:24:33 -- common/autotest_common.sh@716 -- # xtrace_disable
00:18:04.194 20:24:33 -- common/autotest_common.sh@10 -- # set +x
00:18:04.194 20:24:33 -- ftl/bdevperf.sh@41 -- # remove_shm
00:18:04.194 Remove shared memory files
00:18:04.194 20:24:33 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:18:04.194 20:24:33 -- ftl/common.sh@205 -- # rm -f rm -f
00:18:04.194 20:24:33 -- ftl/common.sh@206 -- # rm -f rm -f
00:18:04.194 20:24:33 -- ftl/common.sh@207 -- # rm -f rm -f
00:18:04.194 20:24:33 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:18:04.194 20:24:33 -- ftl/common.sh@209 -- # rm -f rm -f
00:18:04.194
00:18:04.194 real	0m25.922s
00:18:04.194 user	0m28.397s
00:18:04.194 sys	0m1.279s
00:18:04.194 20:24:33 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:18:04.194 20:24:33 -- common/autotest_common.sh@10 -- # set +x
00:18:04.194 ************************************
00:18:04.194 END TEST ftl_bdevperf
00:18:04.194 ************************************
00:18:04.195 20:24:34 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:18:04.195 20:24:34 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
00:18:04.195 20:24:34 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:18:04.195 20:24:34 -- common/autotest_common.sh@10 -- # set +x
00:18:04.195 ************************************
00:18:04.195 START TEST ftl_trim 00:18:04.195 ************************************ 00:18:04.195 20:24:34 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:04.195 * Looking for test storage... 00:18:04.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:04.195 20:24:34 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:04.195 20:24:34 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:04.195 20:24:34 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:04.195 20:24:34 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:04.195 20:24:34 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:04.195 20:24:34 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:04.195 20:24:34 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.195 20:24:34 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:04.195 20:24:34 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:04.195 20:24:34 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.195 20:24:34 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.195 20:24:34 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:04.195 20:24:34 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:04.195 20:24:34 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:04.195 20:24:34 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:04.195 20:24:34 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:04.195 20:24:34 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:04.195 20:24:34 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.195 20:24:34 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:04.195 20:24:34 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:04.195 20:24:34 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:04.195 20:24:34 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:04.195 20:24:34 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:04.195 20:24:34 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:04.195 20:24:34 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:04.195 20:24:34 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:04.195 20:24:34 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:04.195 20:24:34 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:04.195 20:24:34 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:04.195 20:24:34 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:04.195 20:24:34 -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:04.195 20:24:34 -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:04.195 20:24:34 -- ftl/trim.sh@25 -- # timeout=240 00:18:04.195 20:24:34 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:04.195 20:24:34 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:04.195 20:24:34 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:04.195 20:24:34 -- ftl/trim.sh@34 -- # 
export FTL_BDEV_NAME=ftl0 00:18:04.195 20:24:34 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:04.195 20:24:34 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:04.195 20:24:34 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:04.195 20:24:34 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:04.195 20:24:34 -- ftl/trim.sh@40 -- # svcpid=78391 00:18:04.195 20:24:34 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:04.195 20:24:34 -- ftl/trim.sh@41 -- # waitforlisten 78391 00:18:04.195 20:24:34 -- common/autotest_common.sh@817 -- # '[' -z 78391 ']' 00:18:04.195 20:24:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.195 20:24:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:04.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.195 20:24:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.195 20:24:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:04.195 20:24:34 -- common/autotest_common.sh@10 -- # set +x 00:18:04.195 [2024-04-24 20:24:34.379628] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:18:04.195 [2024-04-24 20:24:34.380201] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78391 ] 00:18:04.454 [2024-04-24 20:24:34.551179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:04.710 [2024-04-24 20:24:34.810610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.710 [2024-04-24 20:24:34.810826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.710 [2024-04-24 20:24:34.810890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.646 20:24:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:05.646 20:24:35 -- common/autotest_common.sh@850 -- # return 0 00:18:05.646 20:24:35 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:05.646 20:24:35 -- ftl/common.sh@54 -- # local name=nvme0 00:18:05.646 20:24:35 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:05.646 20:24:35 -- ftl/common.sh@56 -- # local size=103424 00:18:05.646 20:24:35 -- ftl/common.sh@59 -- # local base_bdev 00:18:05.646 20:24:35 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:06.212 20:24:36 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:06.212 20:24:36 -- ftl/common.sh@62 -- # local base_size 00:18:06.212 20:24:36 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:06.212 20:24:36 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:18:06.212 20:24:36 -- common/autotest_common.sh@1365 -- # local bdev_info 00:18:06.212 20:24:36 -- common/autotest_common.sh@1366 -- # local bs 00:18:06.212 20:24:36 -- common/autotest_common.sh@1367 -- # local nb 00:18:06.212 20:24:36 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:06.471 20:24:36 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:18:06.471 { 00:18:06.471 "name": "nvme0n1", 00:18:06.471 "aliases": [ 00:18:06.471 
"4f9e488d-2dd2-455b-88f5-c262d99bc3cc" 00:18:06.471 ], 00:18:06.471 "product_name": "NVMe disk", 00:18:06.471 "block_size": 4096, 00:18:06.471 "num_blocks": 1310720, 00:18:06.471 "uuid": "4f9e488d-2dd2-455b-88f5-c262d99bc3cc", 00:18:06.471 "assigned_rate_limits": { 00:18:06.471 "rw_ios_per_sec": 0, 00:18:06.471 "rw_mbytes_per_sec": 0, 00:18:06.471 "r_mbytes_per_sec": 0, 00:18:06.471 "w_mbytes_per_sec": 0 00:18:06.471 }, 00:18:06.471 "claimed": true, 00:18:06.471 "claim_type": "read_many_write_one", 00:18:06.471 "zoned": false, 00:18:06.471 "supported_io_types": { 00:18:06.471 "read": true, 00:18:06.471 "write": true, 00:18:06.471 "unmap": true, 00:18:06.471 "write_zeroes": true, 00:18:06.471 "flush": true, 00:18:06.471 "reset": true, 00:18:06.471 "compare": true, 00:18:06.471 "compare_and_write": false, 00:18:06.471 "abort": true, 00:18:06.471 "nvme_admin": true, 00:18:06.471 "nvme_io": true 00:18:06.471 }, 00:18:06.471 "driver_specific": { 00:18:06.471 "nvme": [ 00:18:06.471 { 00:18:06.471 "pci_address": "0000:00:11.0", 00:18:06.471 "trid": { 00:18:06.471 "trtype": "PCIe", 00:18:06.471 "traddr": "0000:00:11.0" 00:18:06.471 }, 00:18:06.472 "ctrlr_data": { 00:18:06.472 "cntlid": 0, 00:18:06.472 "vendor_id": "0x1b36", 00:18:06.472 "model_number": "QEMU NVMe Ctrl", 00:18:06.472 "serial_number": "12341", 00:18:06.472 "firmware_revision": "8.0.0", 00:18:06.472 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:06.472 "oacs": { 00:18:06.472 "security": 0, 00:18:06.472 "format": 1, 00:18:06.472 "firmware": 0, 00:18:06.472 "ns_manage": 1 00:18:06.472 }, 00:18:06.472 "multi_ctrlr": false, 00:18:06.472 "ana_reporting": false 00:18:06.472 }, 00:18:06.472 "vs": { 00:18:06.472 "nvme_version": "1.4" 00:18:06.472 }, 00:18:06.472 "ns_data": { 00:18:06.472 "id": 1, 00:18:06.472 "can_share": false 00:18:06.472 } 00:18:06.472 } 00:18:06.472 ], 00:18:06.472 "mp_policy": "active_passive" 00:18:06.472 } 00:18:06.472 } 00:18:06.472 ]' 00:18:06.472 20:24:36 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:18:06.472 20:24:36 -- common/autotest_common.sh@1369 -- # bs=4096 00:18:06.472 20:24:36 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:18:06.472 20:24:36 -- common/autotest_common.sh@1370 -- # nb=1310720 00:18:06.472 20:24:36 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:18:06.472 20:24:36 -- common/autotest_common.sh@1374 -- # echo 5120 00:18:06.472 20:24:36 -- ftl/common.sh@63 -- # base_size=5120 00:18:06.472 20:24:36 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:06.472 20:24:36 -- ftl/common.sh@67 -- # clear_lvols 00:18:06.472 20:24:36 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:06.472 20:24:36 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:06.730 20:24:36 -- ftl/common.sh@28 -- # stores=15ebf23c-0d9b-47f7-a29e-151ddbdfd59d 00:18:06.730 20:24:36 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:06.730 20:24:36 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 15ebf23c-0d9b-47f7-a29e-151ddbdfd59d 00:18:06.988 20:24:37 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:07.246 20:24:37 -- ftl/common.sh@68 -- # lvs=5758f976-9653-4916-99dd-b7bf5944f1ab 00:18:07.246 20:24:37 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5758f976-9653-4916-99dd-b7bf5944f1ab 00:18:07.505 20:24:37 -- ftl/trim.sh@43 -- # split_bdev=b5f20aea-3274-4f54-a96d-f226ea488112 
00:18:07.505 20:24:37 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b5f20aea-3274-4f54-a96d-f226ea488112 00:18:07.505 20:24:37 -- ftl/common.sh@35 -- # local name=nvc0 00:18:07.505 20:24:37 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:07.505 20:24:37 -- ftl/common.sh@37 -- # local base_bdev=b5f20aea-3274-4f54-a96d-f226ea488112 00:18:07.505 20:24:37 -- ftl/common.sh@38 -- # local cache_size= 00:18:07.505 20:24:37 -- ftl/common.sh@41 -- # get_bdev_size b5f20aea-3274-4f54-a96d-f226ea488112 00:18:07.505 20:24:37 -- common/autotest_common.sh@1364 -- # local bdev_name=b5f20aea-3274-4f54-a96d-f226ea488112 00:18:07.505 20:24:37 -- common/autotest_common.sh@1365 -- # local bdev_info 00:18:07.505 20:24:37 -- common/autotest_common.sh@1366 -- # local bs 00:18:07.505 20:24:37 -- common/autotest_common.sh@1367 -- # local nb 00:18:07.505 20:24:37 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5f20aea-3274-4f54-a96d-f226ea488112 00:18:07.763 20:24:37 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:18:07.763 { 00:18:07.763 "name": "b5f20aea-3274-4f54-a96d-f226ea488112", 00:18:07.763 "aliases": [ 00:18:07.763 "lvs/nvme0n1p0" 00:18:07.763 ], 00:18:07.763 "product_name": "Logical Volume", 00:18:07.763 "block_size": 4096, 00:18:07.763 "num_blocks": 26476544, 00:18:07.763 "uuid": "b5f20aea-3274-4f54-a96d-f226ea488112", 00:18:07.763 "assigned_rate_limits": { 00:18:07.763 "rw_ios_per_sec": 0, 00:18:07.763 "rw_mbytes_per_sec": 0, 00:18:07.763 "r_mbytes_per_sec": 0, 00:18:07.763 "w_mbytes_per_sec": 0 00:18:07.763 }, 00:18:07.763 "claimed": false, 00:18:07.763 "zoned": false, 00:18:07.763 "supported_io_types": { 00:18:07.763 "read": true, 00:18:07.763 "write": true, 00:18:07.763 "unmap": true, 00:18:07.763 "write_zeroes": true, 00:18:07.763 "flush": false, 00:18:07.763 "reset": true, 00:18:07.763 "compare": false, 00:18:07.763 "compare_and_write": false, 00:18:07.763 "abort": false, 00:18:07.763 "nvme_admin": false, 00:18:07.763 "nvme_io": false 00:18:07.763 }, 00:18:07.763 "driver_specific": { 00:18:07.763 "lvol": { 00:18:07.763 "lvol_store_uuid": "5758f976-9653-4916-99dd-b7bf5944f1ab", 00:18:07.763 "base_bdev": "nvme0n1", 00:18:07.763 "thin_provision": true, 00:18:07.763 "snapshot": false, 00:18:07.763 "clone": false, 00:18:07.763 "esnap_clone": false 00:18:07.763 } 00:18:07.763 } 00:18:07.763 } 00:18:07.763 ]' 00:18:07.763 20:24:37 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:18:08.021 20:24:37 -- common/autotest_common.sh@1369 -- # bs=4096 00:18:08.021 20:24:37 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:18:08.021 20:24:38 -- common/autotest_common.sh@1370 -- # nb=26476544 00:18:08.021 20:24:38 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:18:08.021 20:24:38 -- common/autotest_common.sh@1374 -- # echo 103424 00:18:08.021 20:24:38 -- ftl/common.sh@41 -- # local base_size=5171 00:18:08.021 20:24:38 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:08.021 20:24:38 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:08.279 20:24:38 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:08.279 20:24:38 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:08.279 20:24:38 -- ftl/common.sh@48 -- # get_bdev_size b5f20aea-3274-4f54-a96d-f226ea488112 00:18:08.279 20:24:38 -- common/autotest_common.sh@1364 -- # local bdev_name=b5f20aea-3274-4f54-a96d-f226ea488112 00:18:08.279 20:24:38 -- common/autotest_common.sh@1365 -- # 
local bdev_info 00:18:08.279 20:24:38 -- common/autotest_common.sh@1366 -- # local bs 00:18:08.279 20:24:38 -- common/autotest_common.sh@1367 -- # local nb 00:18:08.279 20:24:38 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5f20aea-3274-4f54-a96d-f226ea488112 00:18:08.536 20:24:38 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:18:08.536 { 00:18:08.536 "name": "b5f20aea-3274-4f54-a96d-f226ea488112", 00:18:08.536 "aliases": [ 00:18:08.536 "lvs/nvme0n1p0" 00:18:08.536 ], 00:18:08.536 "product_name": "Logical Volume", 00:18:08.536 "block_size": 4096, 00:18:08.536 "num_blocks": 26476544, 00:18:08.536 "uuid": "b5f20aea-3274-4f54-a96d-f226ea488112", 00:18:08.536 "assigned_rate_limits": { 00:18:08.536 "rw_ios_per_sec": 0, 00:18:08.536 "rw_mbytes_per_sec": 0, 00:18:08.536 "r_mbytes_per_sec": 0, 00:18:08.536 "w_mbytes_per_sec": 0 00:18:08.536 }, 00:18:08.536 "claimed": false, 00:18:08.536 "zoned": false, 00:18:08.536 "supported_io_types": { 00:18:08.536 "read": true, 00:18:08.536 "write": true, 00:18:08.536 "unmap": true, 00:18:08.536 "write_zeroes": true, 00:18:08.536 "flush": false, 00:18:08.536 "reset": true, 00:18:08.536 "compare": false, 00:18:08.536 "compare_and_write": false, 00:18:08.536 "abort": false, 00:18:08.536 "nvme_admin": false, 00:18:08.536 "nvme_io": false 00:18:08.536 }, 00:18:08.536 "driver_specific": { 00:18:08.536 "lvol": { 00:18:08.536 "lvol_store_uuid": "5758f976-9653-4916-99dd-b7bf5944f1ab", 00:18:08.536 "base_bdev": "nvme0n1", 00:18:08.536 "thin_provision": true, 00:18:08.536 "snapshot": false, 00:18:08.536 "clone": false, 00:18:08.536 "esnap_clone": false 00:18:08.536 } 00:18:08.536 } 00:18:08.536 } 00:18:08.536 ]' 00:18:08.536 20:24:38 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:18:08.536 20:24:38 -- common/autotest_common.sh@1369 -- # bs=4096 00:18:08.536 20:24:38 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:18:08.536 20:24:38 -- common/autotest_common.sh@1370 -- # nb=26476544 00:18:08.536 20:24:38 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:18:08.536 20:24:38 -- common/autotest_common.sh@1374 -- # echo 103424 00:18:08.536 20:24:38 -- ftl/common.sh@48 -- # cache_size=5171 00:18:08.536 20:24:38 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:08.794 20:24:38 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:08.794 20:24:38 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:08.794 20:24:38 -- ftl/trim.sh@47 -- # get_bdev_size b5f20aea-3274-4f54-a96d-f226ea488112 00:18:08.794 20:24:38 -- common/autotest_common.sh@1364 -- # local bdev_name=b5f20aea-3274-4f54-a96d-f226ea488112 00:18:08.794 20:24:38 -- common/autotest_common.sh@1365 -- # local bdev_info 00:18:08.794 20:24:38 -- common/autotest_common.sh@1366 -- # local bs 00:18:08.794 20:24:38 -- common/autotest_common.sh@1367 -- # local nb 00:18:08.794 20:24:38 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5f20aea-3274-4f54-a96d-f226ea488112 00:18:09.052 20:24:39 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:18:09.052 { 00:18:09.052 "name": "b5f20aea-3274-4f54-a96d-f226ea488112", 00:18:09.052 "aliases": [ 00:18:09.052 "lvs/nvme0n1p0" 00:18:09.052 ], 00:18:09.052 "product_name": "Logical Volume", 00:18:09.052 "block_size": 4096, 00:18:09.052 "num_blocks": 26476544, 00:18:09.052 "uuid": "b5f20aea-3274-4f54-a96d-f226ea488112", 00:18:09.052 "assigned_rate_limits": { 00:18:09.052 "rw_ios_per_sec": 0, 
00:18:09.052 "rw_mbytes_per_sec": 0, 00:18:09.052 "r_mbytes_per_sec": 0, 00:18:09.052 "w_mbytes_per_sec": 0 00:18:09.052 }, 00:18:09.052 "claimed": false, 00:18:09.052 "zoned": false, 00:18:09.052 "supported_io_types": { 00:18:09.052 "read": true, 00:18:09.052 "write": true, 00:18:09.052 "unmap": true, 00:18:09.052 "write_zeroes": true, 00:18:09.052 "flush": false, 00:18:09.052 "reset": true, 00:18:09.052 "compare": false, 00:18:09.052 "compare_and_write": false, 00:18:09.052 "abort": false, 00:18:09.052 "nvme_admin": false, 00:18:09.052 "nvme_io": false 00:18:09.052 }, 00:18:09.052 "driver_specific": { 00:18:09.052 "lvol": { 00:18:09.052 "lvol_store_uuid": "5758f976-9653-4916-99dd-b7bf5944f1ab", 00:18:09.052 "base_bdev": "nvme0n1", 00:18:09.052 "thin_provision": true, 00:18:09.052 "snapshot": false, 00:18:09.052 "clone": false, 00:18:09.052 "esnap_clone": false 00:18:09.052 } 00:18:09.052 } 00:18:09.052 } 00:18:09.052 ]' 00:18:09.052 20:24:39 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:18:09.310 20:24:39 -- common/autotest_common.sh@1369 -- # bs=4096 00:18:09.310 20:24:39 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:18:09.310 20:24:39 -- common/autotest_common.sh@1370 -- # nb=26476544 00:18:09.310 20:24:39 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:18:09.310 20:24:39 -- common/autotest_common.sh@1374 -- # echo 103424 00:18:09.310 20:24:39 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:09.310 20:24:39 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b5f20aea-3274-4f54-a96d-f226ea488112 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:09.570 [2024-04-24 20:24:39.575942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.570 [2024-04-24 20:24:39.575994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:09.570 [2024-04-24 20:24:39.576016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:09.570 [2024-04-24 20:24:39.576026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.570 [2024-04-24 20:24:39.580030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.570 [2024-04-24 20:24:39.580075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:09.570 [2024-04-24 20:24:39.580094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.971 ms 00:18:09.570 [2024-04-24 20:24:39.580105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.570 [2024-04-24 20:24:39.580264] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:09.570 [2024-04-24 20:24:39.581642] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:09.570 [2024-04-24 20:24:39.581681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.570 [2024-04-24 20:24:39.581693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:09.570 [2024-04-24 20:24:39.581711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.435 ms 00:18:09.570 [2024-04-24 20:24:39.581724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.570 [2024-04-24 20:24:39.581954] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 59cba280-e672-4cc5-80b9-6f15d46f3b14 00:18:09.570 [2024-04-24 20:24:39.583452] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.570 [2024-04-24 20:24:39.583487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:09.570 [2024-04-24 20:24:39.583500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:09.570 [2024-04-24 20:24:39.583513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.570 [2024-04-24 20:24:39.591249] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.570 [2024-04-24 20:24:39.591300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:09.570 [2024-04-24 20:24:39.591315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.658 ms 00:18:09.570 [2024-04-24 20:24:39.591329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.570 [2024-04-24 20:24:39.591672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.571 [2024-04-24 20:24:39.591694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:09.571 [2024-04-24 20:24:39.591706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:18:09.571 [2024-04-24 20:24:39.591724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.571 [2024-04-24 20:24:39.591773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.571 [2024-04-24 20:24:39.591791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:09.571 [2024-04-24 20:24:39.591802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:09.571 [2024-04-24 20:24:39.591815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.571 [2024-04-24 20:24:39.591865] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:09.571 [2024-04-24 20:24:39.598176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.571 [2024-04-24 20:24:39.598218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:09.571 [2024-04-24 20:24:39.598235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.334 ms 00:18:09.571 [2024-04-24 20:24:39.598246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.571 [2024-04-24 20:24:39.598351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.571 [2024-04-24 20:24:39.598364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:09.571 [2024-04-24 20:24:39.598378] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:09.571 [2024-04-24 20:24:39.598413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.571 [2024-04-24 20:24:39.598455] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:09.571 [2024-04-24 20:24:39.598574] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:09.571 [2024-04-24 20:24:39.598602] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:09.571 [2024-04-24 20:24:39.598618] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:09.571 [2024-04-24 20:24:39.598642] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 
00:18:09.571 [2024-04-24 20:24:39.598655] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:09.571 [2024-04-24 20:24:39.598669] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:09.571 [2024-04-24 20:24:39.598680] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:09.571 [2024-04-24 20:24:39.598692] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:09.571 [2024-04-24 20:24:39.598702] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:09.571 [2024-04-24 20:24:39.598733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.571 [2024-04-24 20:24:39.598754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:09.571 [2024-04-24 20:24:39.598768] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:18:09.571 [2024-04-24 20:24:39.598795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.571 [2024-04-24 20:24:39.598918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.571 [2024-04-24 20:24:39.598945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:09.571 [2024-04-24 20:24:39.598967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:18:09.571 [2024-04-24 20:24:39.598985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.571 [2024-04-24 20:24:39.599112] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:09.571 [2024-04-24 20:24:39.599131] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:09.571 [2024-04-24 20:24:39.599148] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:09.571 [2024-04-24 20:24:39.599159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.571 [2024-04-24 20:24:39.599173] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:09.571 [2024-04-24 20:24:39.599183] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:09.571 [2024-04-24 20:24:39.599196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:09.571 [2024-04-24 20:24:39.599206] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:09.571 [2024-04-24 20:24:39.599220] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:09.571 [2024-04-24 20:24:39.599230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:09.571 [2024-04-24 20:24:39.599242] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:09.571 [2024-04-24 20:24:39.599252] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:09.571 [2024-04-24 20:24:39.599264] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:09.571 [2024-04-24 20:24:39.599273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:09.571 [2024-04-24 20:24:39.599287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:09.571 [2024-04-24 20:24:39.599297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.571 [2024-04-24 20:24:39.599309] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:09.571 [2024-04-24 20:24:39.599319] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:09.571 
[2024-04-24 20:24:39.599334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.571 [2024-04-24 20:24:39.599343] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:09.571 [2024-04-24 20:24:39.599355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:09.571 [2024-04-24 20:24:39.599365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:09.571 [2024-04-24 20:24:39.599377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:09.571 [2024-04-24 20:24:39.599386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:09.571 [2024-04-24 20:24:39.599404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:09.571 [2024-04-24 20:24:39.599420] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:09.571 [2024-04-24 20:24:39.599442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:09.571 [2024-04-24 20:24:39.599454] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:09.571 [2024-04-24 20:24:39.599466] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:09.571 [2024-04-24 20:24:39.599476] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:09.571 [2024-04-24 20:24:39.599488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:09.571 [2024-04-24 20:24:39.599498] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:09.571 [2024-04-24 20:24:39.599510] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:09.571 [2024-04-24 20:24:39.599519] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:09.571 [2024-04-24 20:24:39.599534] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:09.571 [2024-04-24 20:24:39.599543] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:09.571 [2024-04-24 20:24:39.599555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:09.571 [2024-04-24 20:24:39.599565] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:09.571 [2024-04-24 20:24:39.599577] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:09.571 [2024-04-24 20:24:39.599587] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:09.571 [2024-04-24 20:24:39.599601] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:09.571 [2024-04-24 20:24:39.599612] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:09.571 [2024-04-24 20:24:39.599628] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:09.571 [2024-04-24 20:24:39.599638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:09.571 [2024-04-24 20:24:39.599652] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:09.571 [2024-04-24 20:24:39.599663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:09.571 [2024-04-24 20:24:39.599675] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:09.571 [2024-04-24 20:24:39.599685] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:09.571 [2024-04-24 20:24:39.599697] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:09.571 [2024-04-24 20:24:39.599707] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 102400.00 MiB 00:18:09.571 [2024-04-24 20:24:39.599723] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:09.571 [2024-04-24 20:24:39.599738] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:09.571 [2024-04-24 20:24:39.599753] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:09.571 [2024-04-24 20:24:39.599764] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:09.571 [2024-04-24 20:24:39.599778] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:09.571 [2024-04-24 20:24:39.599789] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:09.571 [2024-04-24 20:24:39.599803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:09.571 [2024-04-24 20:24:39.599814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:09.571 [2024-04-24 20:24:39.599828] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:09.571 [2024-04-24 20:24:39.599838] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:09.571 [2024-04-24 20:24:39.599852] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:09.571 [2024-04-24 20:24:39.599875] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:09.571 [2024-04-24 20:24:39.599888] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:09.571 [2024-04-24 20:24:39.599900] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:09.571 [2024-04-24 20:24:39.599914] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:09.571 [2024-04-24 20:24:39.599924] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:09.571 [2024-04-24 20:24:39.599944] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:09.572 [2024-04-24 20:24:39.599956] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:09.572 [2024-04-24 20:24:39.599970] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:09.572 [2024-04-24 20:24:39.599981] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:09.572 [2024-04-24 20:24:39.599995] upgrade/ftl_sb_v5.c: 
429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:09.572 [2024-04-24 20:24:39.600007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.572 [2024-04-24 20:24:39.600025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:09.572 [2024-04-24 20:24:39.600037] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.937 ms 00:18:09.572 [2024-04-24 20:24:39.600049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.572 [2024-04-24 20:24:39.626852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.572 [2024-04-24 20:24:39.626918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:09.572 [2024-04-24 20:24:39.626937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.739 ms 00:18:09.572 [2024-04-24 20:24:39.626950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.572 [2024-04-24 20:24:39.627128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.572 [2024-04-24 20:24:39.627145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:09.572 [2024-04-24 20:24:39.627157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:09.572 [2024-04-24 20:24:39.627172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.572 [2024-04-24 20:24:39.684378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.572 [2024-04-24 20:24:39.684437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:09.572 [2024-04-24 20:24:39.684454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.246 ms 00:18:09.572 [2024-04-24 20:24:39.684468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.572 [2024-04-24 20:24:39.684603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.572 [2024-04-24 20:24:39.684618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:09.572 [2024-04-24 20:24:39.684632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:09.572 [2024-04-24 20:24:39.684645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.572 [2024-04-24 20:24:39.685153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.572 [2024-04-24 20:24:39.685182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:09.572 [2024-04-24 20:24:39.685194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.477 ms 00:18:09.572 [2024-04-24 20:24:39.685207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.572 [2024-04-24 20:24:39.685358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.572 [2024-04-24 20:24:39.685374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:09.572 [2024-04-24 20:24:39.685385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:18:09.572 [2024-04-24 20:24:39.685405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.572 [2024-04-24 20:24:39.720119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.572 [2024-04-24 20:24:39.720176] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:09.572 [2024-04-24 20:24:39.720197] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.731 ms 00:18:09.572 [2024-04-24 20:24:39.720210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.572 [2024-04-24 20:24:39.736183] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:09.572 [2024-04-24 20:24:39.753259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.572 [2024-04-24 20:24:39.753315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:09.572 [2024-04-24 20:24:39.753334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.888 ms 00:18:09.572 [2024-04-24 20:24:39.753346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.831 [2024-04-24 20:24:39.840871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.831 [2024-04-24 20:24:39.840950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:09.831 [2024-04-24 20:24:39.840972] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.542 ms 00:18:09.831 [2024-04-24 20:24:39.840983] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.831 [2024-04-24 20:24:39.841163] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:09.831 [2024-04-24 20:24:39.841181] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:12.362 [2024-04-24 20:24:42.152007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.362 [2024-04-24 20:24:42.152075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:12.362 [2024-04-24 20:24:42.152097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2314.583 ms 00:18:12.362 [2024-04-24 20:24:42.152108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.362 [2024-04-24 20:24:42.152402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.362 [2024-04-24 20:24:42.152417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:12.362 [2024-04-24 20:24:42.152431] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:18:12.362 [2024-04-24 20:24:42.152441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.362 [2024-04-24 20:24:42.195071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.362 [2024-04-24 20:24:42.195135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:12.362 [2024-04-24 20:24:42.195157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.650 ms 00:18:12.362 [2024-04-24 20:24:42.195172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.362 [2024-04-24 20:24:42.237277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.362 [2024-04-24 20:24:42.237332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:12.362 [2024-04-24 20:24:42.237352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.011 ms 00:18:12.362 [2024-04-24 20:24:42.237363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.362 [2024-04-24 20:24:42.237947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.362 [2024-04-24 20:24:42.237966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:18:12.362 [2024-04-24 20:24:42.237987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:18:12.362 [2024-04-24 20:24:42.237996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.362 [2024-04-24 20:24:42.342938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.362 [2024-04-24 20:24:42.342996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:12.362 [2024-04-24 20:24:42.343017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.057 ms 00:18:12.362 [2024-04-24 20:24:42.343029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.362 [2024-04-24 20:24:42.387009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.362 [2024-04-24 20:24:42.387079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:12.362 [2024-04-24 20:24:42.387100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.900 ms 00:18:12.362 [2024-04-24 20:24:42.387112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.362 [2024-04-24 20:24:42.391969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.362 [2024-04-24 20:24:42.392013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:12.362 [2024-04-24 20:24:42.392030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.722 ms 00:18:12.362 [2024-04-24 20:24:42.392040] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.362 [2024-04-24 20:24:42.434296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.362 [2024-04-24 20:24:42.434370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:12.362 [2024-04-24 20:24:42.434390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.235 ms 00:18:12.362 [2024-04-24 20:24:42.434402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.362 [2024-04-24 20:24:42.434564] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.362 [2024-04-24 20:24:42.434580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:12.362 [2024-04-24 20:24:42.434599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:12.362 [2024-04-24 20:24:42.434610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.362 [2024-04-24 20:24:42.434709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.362 [2024-04-24 20:24:42.434720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:12.362 [2024-04-24 20:24:42.434734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:18:12.362 [2024-04-24 20:24:42.434754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.362 [2024-04-24 20:24:42.435823] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:12.362 [2024-04-24 20:24:42.442385] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2864.224 ms, result 0 00:18:12.362 [2024-04-24 20:24:42.443290] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:12.362 { 00:18:12.362 "name": "ftl0", 00:18:12.362 "uuid": "59cba280-e672-4cc5-80b9-6f15d46f3b14" 
00:18:12.362 } 00:18:12.362 20:24:42 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:12.362 20:24:42 -- common/autotest_common.sh@885 -- # local bdev_name=ftl0 00:18:12.362 20:24:42 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:12.362 20:24:42 -- common/autotest_common.sh@887 -- # local i 00:18:12.362 20:24:42 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:12.362 20:24:42 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:12.362 20:24:42 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:12.620 20:24:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:12.620 [ 00:18:12.620 { 00:18:12.620 "name": "ftl0", 00:18:12.620 "aliases": [ 00:18:12.620 "59cba280-e672-4cc5-80b9-6f15d46f3b14" 00:18:12.620 ], 00:18:12.620 "product_name": "FTL disk", 00:18:12.620 "block_size": 4096, 00:18:12.620 "num_blocks": 23592960, 00:18:12.620 "uuid": "59cba280-e672-4cc5-80b9-6f15d46f3b14", 00:18:12.620 "assigned_rate_limits": { 00:18:12.620 "rw_ios_per_sec": 0, 00:18:12.620 "rw_mbytes_per_sec": 0, 00:18:12.620 "r_mbytes_per_sec": 0, 00:18:12.620 "w_mbytes_per_sec": 0 00:18:12.620 }, 00:18:12.620 "claimed": false, 00:18:12.620 "zoned": false, 00:18:12.620 "supported_io_types": { 00:18:12.620 "read": true, 00:18:12.620 "write": true, 00:18:12.620 "unmap": true, 00:18:12.620 "write_zeroes": true, 00:18:12.620 "flush": true, 00:18:12.620 "reset": false, 00:18:12.620 "compare": false, 00:18:12.620 "compare_and_write": false, 00:18:12.620 "abort": false, 00:18:12.620 "nvme_admin": false, 00:18:12.620 "nvme_io": false 00:18:12.620 }, 00:18:12.620 "driver_specific": { 00:18:12.620 "ftl": { 00:18:12.620 "base_bdev": "b5f20aea-3274-4f54-a96d-f226ea488112", 00:18:12.620 "cache": "nvc0n1p0" 00:18:12.620 } 00:18:12.620 } 00:18:12.620 } 00:18:12.620 ] 00:18:12.620 20:24:42 -- common/autotest_common.sh@893 -- # return 0 00:18:12.620 20:24:42 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:12.620 20:24:42 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:12.879 20:24:43 -- ftl/trim.sh@56 -- # echo ']}' 00:18:12.879 20:24:43 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:13.137 20:24:43 -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:13.137 { 00:18:13.137 "name": "ftl0", 00:18:13.137 "aliases": [ 00:18:13.137 "59cba280-e672-4cc5-80b9-6f15d46f3b14" 00:18:13.137 ], 00:18:13.137 "product_name": "FTL disk", 00:18:13.137 "block_size": 4096, 00:18:13.137 "num_blocks": 23592960, 00:18:13.137 "uuid": "59cba280-e672-4cc5-80b9-6f15d46f3b14", 00:18:13.137 "assigned_rate_limits": { 00:18:13.137 "rw_ios_per_sec": 0, 00:18:13.137 "rw_mbytes_per_sec": 0, 00:18:13.137 "r_mbytes_per_sec": 0, 00:18:13.137 "w_mbytes_per_sec": 0 00:18:13.137 }, 00:18:13.137 "claimed": false, 00:18:13.137 "zoned": false, 00:18:13.137 "supported_io_types": { 00:18:13.137 "read": true, 00:18:13.137 "write": true, 00:18:13.137 "unmap": true, 00:18:13.137 "write_zeroes": true, 00:18:13.137 "flush": true, 00:18:13.137 "reset": false, 00:18:13.137 "compare": false, 00:18:13.137 "compare_and_write": false, 00:18:13.137 "abort": false, 00:18:13.137 "nvme_admin": false, 00:18:13.137 "nvme_io": false 00:18:13.137 }, 00:18:13.137 "driver_specific": { 00:18:13.137 "ftl": { 00:18:13.137 "base_bdev": "b5f20aea-3274-4f54-a96d-f226ea488112", 00:18:13.137 "cache": "nvc0n1p0" 00:18:13.137 } 00:18:13.137 } 00:18:13.137 } 
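The waitforbdev helper traced above boils down to polling the RPC layer until the target bdev answers; bdev_wait_for_examine is issued first so the bdev layer has finished examining devices before the lookup is trusted. A minimal bash re-creation of that idea, assuming an SPDK checkout with scripts/rpc.py available (the retry count and sleep interval here are arbitrary placeholders, not what autotest_common.sh actually uses):

  # Poll until bdev_get_bdevs succeeds for ftl0, as in the trace above.
  # -t 2000 already makes each query wait up to 2000 ms for the bdev.
  for i in $(seq 1 20); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 >/dev/null 2>&1; then
      echo 'ftl0 ready'
      break
    fi
    sleep 0.5
  done

In the run captured here the very first bdev_get_bdevs call already succeeds (the trace returns 0), so no retries were needed.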
00:18:13.137 ]' 00:18:13.137 20:24:43 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:13.137 20:24:43 -- ftl/trim.sh@60 -- # nb=23592960 00:18:13.137 20:24:43 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:13.397 [2024-04-24 20:24:43.422887] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.397 [2024-04-24 20:24:43.422947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:13.397 [2024-04-24 20:24:43.422964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:13.397 [2024-04-24 20:24:43.422978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.397 [2024-04-24 20:24:43.423022] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:13.397 [2024-04-24 20:24:43.426779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.397 [2024-04-24 20:24:43.426821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:13.397 [2024-04-24 20:24:43.426840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.737 ms 00:18:13.397 [2024-04-24 20:24:43.426850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.397 [2024-04-24 20:24:43.427507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.397 [2024-04-24 20:24:43.427538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:13.397 [2024-04-24 20:24:43.427554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:18:13.397 [2024-04-24 20:24:43.427566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.397 [2024-04-24 20:24:43.430480] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.397 [2024-04-24 20:24:43.430502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:13.397 [2024-04-24 20:24:43.430520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.882 ms 00:18:13.397 [2024-04-24 20:24:43.430530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.397 [2024-04-24 20:24:43.436442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.397 [2024-04-24 20:24:43.436486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:13.397 [2024-04-24 20:24:43.436502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.852 ms 00:18:13.397 [2024-04-24 20:24:43.436517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.397 [2024-04-24 20:24:43.478273] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.397 [2024-04-24 20:24:43.478331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:13.397 [2024-04-24 20:24:43.478351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.687 ms 00:18:13.397 [2024-04-24 20:24:43.478361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.397 [2024-04-24 20:24:43.502718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.397 [2024-04-24 20:24:43.502805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:13.397 [2024-04-24 20:24:43.502826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.256 ms 00:18:13.397 [2024-04-24 20:24:43.502837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:13.397 [2024-04-24 20:24:43.503146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.397 [2024-04-24 20:24:43.503162] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:13.397 [2024-04-24 20:24:43.503176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:18:13.397 [2024-04-24 20:24:43.503188] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.397 [2024-04-24 20:24:43.544603] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.397 [2024-04-24 20:24:43.544669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:13.397 [2024-04-24 20:24:43.544688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.434 ms 00:18:13.397 [2024-04-24 20:24:43.544699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.397 [2024-04-24 20:24:43.588580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.397 [2024-04-24 20:24:43.588649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:13.397 [2024-04-24 20:24:43.588669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.799 ms 00:18:13.397 [2024-04-24 20:24:43.588681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.397 [2024-04-24 20:24:43.630560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.397 [2024-04-24 20:24:43.630637] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:13.658 [2024-04-24 20:24:43.630660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.794 ms 00:18:13.658 [2024-04-24 20:24:43.630673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.658 [2024-04-24 20:24:43.672894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.658 [2024-04-24 20:24:43.672957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:13.658 [2024-04-24 20:24:43.672977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.008 ms 00:18:13.658 [2024-04-24 20:24:43.672987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.658 [2024-04-24 20:24:43.673132] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:13.658 [2024-04-24 20:24:43.673152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 
20:24:43.673253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 
00:18:13.658 [2024-04-24 20:24:43.673586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 
wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:13.658 [2024-04-24 20:24:43.673971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.673986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.673998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:13.659 [2024-04-24 20:24:43.674453] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:13.659 [2024-04-24 20:24:43.674466] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 59cba280-e672-4cc5-80b9-6f15d46f3b14 00:18:13.659 [2024-04-24 20:24:43.674478] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:13.659 [2024-04-24 20:24:43.674490] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:13.659 [2024-04-24 20:24:43.674500] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:13.659 [2024-04-24 20:24:43.674514] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:13.659 [2024-04-24 20:24:43.674524] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:13.659 [2024-04-24 20:24:43.674540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:13.659 [2024-04-24 20:24:43.674550] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:13.659 [2024-04-24 
20:24:43.674561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:13.659 [2024-04-24 20:24:43.674570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:13.659 [2024-04-24 20:24:43.674586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.659 [2024-04-24 20:24:43.674596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:13.659 [2024-04-24 20:24:43.674610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.460 ms 00:18:13.659 [2024-04-24 20:24:43.674620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.659 [2024-04-24 20:24:43.696079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.659 [2024-04-24 20:24:43.696141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:13.659 [2024-04-24 20:24:43.696160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.435 ms 00:18:13.659 [2024-04-24 20:24:43.696175] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.659 [2024-04-24 20:24:43.696531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.659 [2024-04-24 20:24:43.696546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:13.659 [2024-04-24 20:24:43.696560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:18:13.659 [2024-04-24 20:24:43.696570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.659 [2024-04-24 20:24:43.767153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.659 [2024-04-24 20:24:43.767217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.659 [2024-04-24 20:24:43.767240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.659 [2024-04-24 20:24:43.767251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.659 [2024-04-24 20:24:43.767410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.659 [2024-04-24 20:24:43.767429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.659 [2024-04-24 20:24:43.767446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.659 [2024-04-24 20:24:43.767477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.659 [2024-04-24 20:24:43.767576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.659 [2024-04-24 20:24:43.767595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.659 [2024-04-24 20:24:43.767609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.659 [2024-04-24 20:24:43.767620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.659 [2024-04-24 20:24:43.767664] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.659 [2024-04-24 20:24:43.767676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.659 [2024-04-24 20:24:43.767689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.659 [2024-04-24 20:24:43.767699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.919 [2024-04-24 20:24:43.901917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.919 [2024-04-24 20:24:43.901969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize NV cache 00:18:13.919 [2024-04-24 20:24:43.901988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.919 [2024-04-24 20:24:43.902002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.919 [2024-04-24 20:24:43.950545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.919 [2024-04-24 20:24:43.950607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.919 [2024-04-24 20:24:43.950626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.919 [2024-04-24 20:24:43.950637] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.919 [2024-04-24 20:24:43.950768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.919 [2024-04-24 20:24:43.950798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.919 [2024-04-24 20:24:43.950812] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.919 [2024-04-24 20:24:43.950823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.919 [2024-04-24 20:24:43.950902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.919 [2024-04-24 20:24:43.950915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.919 [2024-04-24 20:24:43.950929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.919 [2024-04-24 20:24:43.950940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.919 [2024-04-24 20:24:43.951083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.919 [2024-04-24 20:24:43.951097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.919 [2024-04-24 20:24:43.951127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.919 [2024-04-24 20:24:43.951138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.919 [2024-04-24 20:24:43.951211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.919 [2024-04-24 20:24:43.951231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:13.919 [2024-04-24 20:24:43.951244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.919 [2024-04-24 20:24:43.951255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.919 [2024-04-24 20:24:43.951310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.919 [2024-04-24 20:24:43.951321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.919 [2024-04-24 20:24:43.951334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.919 [2024-04-24 20:24:43.951345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.919 [2024-04-24 20:24:43.951408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.919 [2024-04-24 20:24:43.951420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.919 [2024-04-24 20:24:43.951434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.919 [2024-04-24 20:24:43.951445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.920 [2024-04-24 20:24:43.951662] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, 
name 'FTL shutdown', duration = 529.602 ms, result 0 00:18:13.920 true 00:18:13.920 20:24:43 -- ftl/trim.sh@63 -- # killprocess 78391 00:18:13.920 20:24:43 -- common/autotest_common.sh@936 -- # '[' -z 78391 ']' 00:18:13.920 20:24:43 -- common/autotest_common.sh@940 -- # kill -0 78391 00:18:13.920 20:24:43 -- common/autotest_common.sh@941 -- # uname 00:18:13.920 20:24:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:13.920 20:24:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78391 00:18:13.920 20:24:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:13.920 20:24:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:13.920 killing process with pid 78391 00:18:13.920 20:24:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78391' 00:18:13.920 20:24:44 -- common/autotest_common.sh@955 -- # kill 78391 00:18:13.920 20:24:44 -- common/autotest_common.sh@960 -- # wait 78391 00:18:20.514 20:24:49 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:20.774 65536+0 records in 00:18:20.774 65536+0 records out 00:18:20.774 268435456 bytes (268 MB, 256 MiB) copied, 0.980288 s, 274 MB/s 00:18:20.774 20:24:50 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:20.774 [2024-04-24 20:24:50.982874] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:18:20.774 [2024-04-24 20:24:50.982983] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78596 ] 00:18:21.033 [2024-04-24 20:24:51.150179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.319 [2024-04-24 20:24:51.394159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.615 [2024-04-24 20:24:51.803600] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:21.615 [2024-04-24 20:24:51.803676] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:21.876 [2024-04-24 20:24:51.964513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.876 [2024-04-24 20:24:51.964576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:21.876 [2024-04-24 20:24:51.964593] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:21.876 [2024-04-24 20:24:51.964607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.876 [2024-04-24 20:24:51.968118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.876 [2024-04-24 20:24:51.968156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:21.876 [2024-04-24 20:24:51.968169] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.495 ms 00:18:21.876 [2024-04-24 20:24:51.968178] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.876 [2024-04-24 20:24:51.968289] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:21.876 [2024-04-24 20:24:51.969443] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:21.876 [2024-04-24 20:24:51.969476] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.876 [2024-04-24 20:24:51.969487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:21.876 [2024-04-24 20:24:51.969501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.198 ms 00:18:21.876 [2024-04-24 20:24:51.969511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.876 [2024-04-24 20:24:51.971037] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:21.876 [2024-04-24 20:24:51.990462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.876 [2024-04-24 20:24:51.990497] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:21.876 [2024-04-24 20:24:51.990511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.458 ms 00:18:21.876 [2024-04-24 20:24:51.990522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.876 [2024-04-24 20:24:51.990615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.876 [2024-04-24 20:24:51.990629] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:21.876 [2024-04-24 20:24:51.990648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:21.876 [2024-04-24 20:24:51.990662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.876 [2024-04-24 20:24:51.997427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.876 [2024-04-24 20:24:51.997456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:21.876 [2024-04-24 20:24:51.997468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.734 ms 00:18:21.876 [2024-04-24 20:24:51.997478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.876 [2024-04-24 20:24:51.997589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.876 [2024-04-24 20:24:51.997607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:21.876 [2024-04-24 20:24:51.997619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:18:21.876 [2024-04-24 20:24:51.997634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.876 [2024-04-24 20:24:51.997672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.876 [2024-04-24 20:24:51.997685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:21.876 [2024-04-24 20:24:51.997695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:21.876 [2024-04-24 20:24:51.997704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.876 [2024-04-24 20:24:51.997729] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:21.876 [2024-04-24 20:24:52.003462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.876 [2024-04-24 20:24:52.003489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:21.876 [2024-04-24 20:24:52.003501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.749 ms 00:18:21.876 [2024-04-24 20:24:52.003511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.876 [2024-04-24 20:24:52.003580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.876 [2024-04-24 20:24:52.003592] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:21.876 [2024-04-24 20:24:52.003603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:21.876 [2024-04-24 20:24:52.003613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.876 [2024-04-24 20:24:52.003632] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:21.876 [2024-04-24 20:24:52.003655] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:21.876 [2024-04-24 20:24:52.003688] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:21.876 [2024-04-24 20:24:52.003709] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:21.876 [2024-04-24 20:24:52.003775] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:21.876 [2024-04-24 20:24:52.003788] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:21.876 [2024-04-24 20:24:52.003801] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:21.876 [2024-04-24 20:24:52.003814] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:21.876 [2024-04-24 20:24:52.003825] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:21.876 [2024-04-24 20:24:52.003837] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:21.876 [2024-04-24 20:24:52.003847] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:21.876 [2024-04-24 20:24:52.003866] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:21.876 [2024-04-24 20:24:52.003876] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:21.876 [2024-04-24 20:24:52.003886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.876 [2024-04-24 20:24:52.003900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:21.876 [2024-04-24 20:24:52.003914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:18:21.876 [2024-04-24 20:24:52.003923] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.876 [2024-04-24 20:24:52.003983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.876 [2024-04-24 20:24:52.003994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:21.876 [2024-04-24 20:24:52.004004] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:21.876 [2024-04-24 20:24:52.004014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.876 [2024-04-24 20:24:52.004081] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:21.876 [2024-04-24 20:24:52.004093] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:21.876 [2024-04-24 20:24:52.004107] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:21.876 [2024-04-24 20:24:52.004117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.876 [2024-04-24 20:24:52.004127] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 
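A quick cross-check of the layout numbers printed above: 23592960 L2P entries at an address size of 4 bytes is exactly the 90 MiB l2p region shown in the layout dump, and the same entry count times the 4096-byte block size is the 90 GiB exposed earlier as num_blocks in the bdev_get_bdevs output. A shell sanity check, using only figures taken from this log:

  echo $(( 23592960 * 4 / 1024 / 1024 ))     # 90  -> MiB occupied by the L2P table
  echo $(( 23592960 * 4096 / 1073741824 ))   # 90  -> GiB of addressable user data

The superblock region table converts the same way: blk_sz values are counts of 4096-byte blocks, so blk_sz:0x5a00 works out to $(( 0x5a00 * 4096 / 1048576 )) = 90 MiB, which lines up with the 90.00 MiB l2p region. The gap between the 90 GiB of user blocks and the 103424.00 MiB base device is taken up by the metadata regions dumped here plus the blocks FTL holds back for itself.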
00:18:21.876 [2024-04-24 20:24:52.004137] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:21.876 [2024-04-24 20:24:52.004146] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:21.876 [2024-04-24 20:24:52.004156] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:21.876 [2024-04-24 20:24:52.004166] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:21.876 [2024-04-24 20:24:52.004175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:21.877 [2024-04-24 20:24:52.004196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:21.877 [2024-04-24 20:24:52.004206] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:21.877 [2024-04-24 20:24:52.004215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:21.877 [2024-04-24 20:24:52.004224] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:21.877 [2024-04-24 20:24:52.004233] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:21.877 [2024-04-24 20:24:52.004242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.877 [2024-04-24 20:24:52.004252] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:21.877 [2024-04-24 20:24:52.004261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:21.877 [2024-04-24 20:24:52.004270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.877 [2024-04-24 20:24:52.004279] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:21.877 [2024-04-24 20:24:52.004289] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:21.877 [2024-04-24 20:24:52.004298] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:21.877 [2024-04-24 20:24:52.004307] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:21.877 [2024-04-24 20:24:52.004316] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:21.877 [2024-04-24 20:24:52.004325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:21.877 [2024-04-24 20:24:52.004334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:21.877 [2024-04-24 20:24:52.004343] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:21.877 [2024-04-24 20:24:52.004352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:21.877 [2024-04-24 20:24:52.004360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:21.877 [2024-04-24 20:24:52.004369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:21.877 [2024-04-24 20:24:52.004378] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:21.877 [2024-04-24 20:24:52.004387] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:21.877 [2024-04-24 20:24:52.004396] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:21.877 [2024-04-24 20:24:52.004404] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:21.877 [2024-04-24 20:24:52.004413] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:21.877 [2024-04-24 20:24:52.004422] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:21.877 [2024-04-24 20:24:52.004431] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.25 MiB 00:18:21.877 [2024-04-24 20:24:52.004440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:21.877 [2024-04-24 20:24:52.004449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:21.877 [2024-04-24 20:24:52.004457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:21.877 [2024-04-24 20:24:52.004466] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:21.877 [2024-04-24 20:24:52.004477] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:21.877 [2024-04-24 20:24:52.004487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:21.877 [2024-04-24 20:24:52.004497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:21.877 [2024-04-24 20:24:52.004507] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:21.877 [2024-04-24 20:24:52.004516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:21.877 [2024-04-24 20:24:52.004525] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:21.877 [2024-04-24 20:24:52.004535] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:21.877 [2024-04-24 20:24:52.004544] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:21.877 [2024-04-24 20:24:52.004553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:21.877 [2024-04-24 20:24:52.004563] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:21.877 [2024-04-24 20:24:52.004576] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:21.877 [2024-04-24 20:24:52.004587] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:21.877 [2024-04-24 20:24:52.004597] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:21.877 [2024-04-24 20:24:52.004608] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:21.877 [2024-04-24 20:24:52.004618] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:21.877 [2024-04-24 20:24:52.004628] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:21.877 [2024-04-24 20:24:52.004638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:21.877 [2024-04-24 20:24:52.004648] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:21.877 [2024-04-24 20:24:52.004658] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:21.877 [2024-04-24 20:24:52.004668] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:21.877 [2024-04-24 20:24:52.004678] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 
blk_sz:0x20 00:18:21.877 [2024-04-24 20:24:52.004688] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:21.877 [2024-04-24 20:24:52.004698] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:21.877 [2024-04-24 20:24:52.004708] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:21.877 [2024-04-24 20:24:52.004718] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:21.877 [2024-04-24 20:24:52.004730] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:21.877 [2024-04-24 20:24:52.004741] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:21.877 [2024-04-24 20:24:52.004751] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:21.877 [2024-04-24 20:24:52.004761] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:21.877 [2024-04-24 20:24:52.004772] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:21.877 [2024-04-24 20:24:52.004782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.877 [2024-04-24 20:24:52.004798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:21.877 [2024-04-24 20:24:52.004808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:18:21.877 [2024-04-24 20:24:52.004817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.877 [2024-04-24 20:24:52.029916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.877 [2024-04-24 20:24:52.029952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:21.877 [2024-04-24 20:24:52.029965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.080 ms 00:18:21.877 [2024-04-24 20:24:52.029975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.877 [2024-04-24 20:24:52.030100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.877 [2024-04-24 20:24:52.030113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:21.877 [2024-04-24 20:24:52.030125] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:21.877 [2024-04-24 20:24:52.030135] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.877 [2024-04-24 20:24:52.093459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.877 [2024-04-24 20:24:52.093506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:21.877 [2024-04-24 20:24:52.093522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.402 ms 00:18:21.877 [2024-04-24 20:24:52.093532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.877 [2024-04-24 20:24:52.093630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.877 [2024-04-24 20:24:52.093642] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:21.877 [2024-04-24 20:24:52.093654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:21.877 [2024-04-24 20:24:52.093664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.877 [2024-04-24 20:24:52.094138] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.877 [2024-04-24 20:24:52.094159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:21.877 [2024-04-24 20:24:52.094171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:18:21.877 [2024-04-24 20:24:52.094181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.877 [2024-04-24 20:24:52.094302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.877 [2024-04-24 20:24:52.094316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:21.877 [2024-04-24 20:24:52.094332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:21.877 [2024-04-24 20:24:52.094348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 20:24:52.117537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.117575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.137 [2024-04-24 20:24:52.117589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.201 ms 00:18:22.137 [2024-04-24 20:24:52.117599] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 20:24:52.137128] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:22.137 [2024-04-24 20:24:52.137163] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:22.137 [2024-04-24 20:24:52.137182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.137193] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:22.137 [2024-04-24 20:24:52.137205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.477 ms 00:18:22.137 [2024-04-24 20:24:52.137215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 20:24:52.168784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.168826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:22.137 [2024-04-24 20:24:52.168841] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.540 ms 00:18:22.137 [2024-04-24 20:24:52.168865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 20:24:52.189921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.189982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:22.137 [2024-04-24 20:24:52.189997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.980 ms 00:18:22.137 [2024-04-24 20:24:52.190008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 20:24:52.210636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.210683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
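The "SB metadata layout" dumps above print one Region record per entry as type/ver/blk_offs/blk_sz, and the offsets chain: each region's blk_offs equals the previous region's blk_offs plus blk_sz (for example, 0x20 + 0x5a00 = 0x5a20 for the region after the superblock), so within one dump section the regions tile the device with no gaps. A minimal Python sketch of that check, not part of the test itself; it assumes you feed it one dump section at a time, and check_contiguous is a hypothetical helper name:

    import re

    # Matches the Region lines printed by ftl_superblock_v5_md_layout_dump.
    REGION = re.compile(
        r"Region type:0x[0-9a-fA-F]+ ver:\d+ "
        r"blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)")

    def check_contiguous(dump_text):
        """Assert that regions in ONE dump section tile without gaps."""
        regions = [(int(o, 16), int(s, 16)) for o, s in REGION.findall(dump_text)]
        for (off, size), (next_off, _) in zip(regions, regions[1:]):
            assert off + size == next_off, f"gap after region at {off:#x}"
        off, size = regions[-1]
        return off + size  # total blocks covered by the layout

    # e.g. the nvc section above ends at 0x106be0 + 0x3c720 = 0x143300 blocks.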
00:18:22.137 [2024-04-24 20:24:52.210698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.570 ms 00:18:22.137 [2024-04-24 20:24:52.210708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 20:24:52.211312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.211339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:22.137 [2024-04-24 20:24:52.211352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:18:22.137 [2024-04-24 20:24:52.211363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 20:24:52.308451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.308506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:22.137 [2024-04-24 20:24:52.308523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.210 ms 00:18:22.137 [2024-04-24 20:24:52.308534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 20:24:52.321642] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:22.137 [2024-04-24 20:24:52.338289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.338338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.137 [2024-04-24 20:24:52.338352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.669 ms 00:18:22.137 [2024-04-24 20:24:52.338363] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 20:24:52.338469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.338481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:22.137 [2024-04-24 20:24:52.338493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:22.137 [2024-04-24 20:24:52.338503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 20:24:52.338556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.338568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:22.137 [2024-04-24 20:24:52.338582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:22.137 [2024-04-24 20:24:52.338591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 20:24:52.340909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.340942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:22.137 [2024-04-24 20:24:52.340955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.299 ms 00:18:22.137 [2024-04-24 20:24:52.340965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 20:24:52.341006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.341018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:22.137 [2024-04-24 20:24:52.341028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:22.137 [2024-04-24 20:24:52.341038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.137 [2024-04-24 
20:24:52.341076] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:22.137 [2024-04-24 20:24:52.341088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.137 [2024-04-24 20:24:52.341098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:22.137 [2024-04-24 20:24:52.341108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:22.137 [2024-04-24 20:24:52.341118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.395 [2024-04-24 20:24:52.380432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.396 [2024-04-24 20:24:52.380481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:22.396 [2024-04-24 20:24:52.380504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.350 ms 00:18:22.396 [2024-04-24 20:24:52.380531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.396 [2024-04-24 20:24:52.380662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.396 [2024-04-24 20:24:52.380686] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:22.396 [2024-04-24 20:24:52.380697] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:22.396 [2024-04-24 20:24:52.380708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.396 [2024-04-24 20:24:52.381774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:22.396 [2024-04-24 20:24:52.387610] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 417.638 ms, result 0 00:18:22.396 [2024-04-24 20:24:52.388375] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:22.396 [2024-04-24 20:24:52.406900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:31.287  Copying: 29/256 [MB] (29 MBps) Copying: 59/256 [MB] (30 MBps) Copying: 89/256 [MB] (29 MBps) Copying: 122/256 [MB] (32 MBps) Copying: 152/256 [MB] (29 MBps) Copying: 180/256 [MB] (28 MBps) Copying: 208/256 [MB] (27 MBps) Copying: 234/256 [MB] (25 MBps) Copying: 256/256 [MB] (average 29 MBps)[2024-04-24 20:25:01.198649] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:31.287 [2024-04-24 20:25:01.213210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.287 [2024-04-24 20:25:01.213255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:31.287 [2024-04-24 20:25:01.213272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:31.287 [2024-04-24 20:25:01.213282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.287 [2024-04-24 20:25:01.213306] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:31.287 [2024-04-24 20:25:01.216931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.287 [2024-04-24 20:25:01.216958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:31.287 [2024-04-24 20:25:01.216980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.612 ms 00:18:31.287 [2024-04-24 20:25:01.216990] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.287 [2024-04-24 20:25:01.219073] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.287 [2024-04-24 20:25:01.219109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:31.287 [2024-04-24 20:25:01.219122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.060 ms 00:18:31.287 [2024-04-24 20:25:01.219133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.287 [2024-04-24 20:25:01.225906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.287 [2024-04-24 20:25:01.225946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:31.287 [2024-04-24 20:25:01.225958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.764 ms 00:18:31.287 [2024-04-24 20:25:01.225969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.287 [2024-04-24 20:25:01.231620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.287 [2024-04-24 20:25:01.231650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:31.287 [2024-04-24 20:25:01.231662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.600 ms 00:18:31.287 [2024-04-24 20:25:01.231672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.287 [2024-04-24 20:25:01.270740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.287 [2024-04-24 20:25:01.270800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:31.287 [2024-04-24 20:25:01.270816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.087 ms 00:18:31.287 [2024-04-24 20:25:01.270826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.287 [2024-04-24 20:25:01.294309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.287 [2024-04-24 20:25:01.294365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:31.287 [2024-04-24 20:25:01.294381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.413 ms 00:18:31.287 [2024-04-24 20:25:01.294393] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.287 [2024-04-24 20:25:01.294552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.287 [2024-04-24 20:25:01.294574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:31.287 [2024-04-24 20:25:01.294597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:18:31.287 [2024-04-24 20:25:01.294608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.287 [2024-04-24 20:25:01.335422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.287 [2024-04-24 20:25:01.335480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:31.287 [2024-04-24 20:25:01.335497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.859 ms 00:18:31.287 [2024-04-24 20:25:01.335508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.287 [2024-04-24 20:25:01.373791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.287 [2024-04-24 20:25:01.373847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:31.287 [2024-04-24 20:25:01.373870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.252 ms 
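Each management step in this shutdown sequence is traced as an Action marker followed by name/duration/status records from mngt/ftl_mngt.c. A small sketch that tallies per-step durations from a captured log, assuming one record per line as spdk_tgt emits them (the wrapping seen above is a console-capture artifact); step_durations is a hypothetical helper name:

    import re
    import sys

    # name/duration records from mngt/ftl_mngt.c trace_step output.
    NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
    DURATION = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([\d.]+) ms")

    def step_durations(lines):
        """Yield (step name, duration in ms), pairing each name record
        with the duration record that follows it."""
        pending = None
        for line in lines:
            m = NAME.search(line)
            if m:
                pending = m.group(1)
                continue
            m = DURATION.search(line)
            if m and pending is not None:
                yield pending, float(m.group(1))
                pending = None

    if __name__ == "__main__":
        total = 0.0
        for name, ms in step_durations(open(sys.argv[1])):
            total += ms
            print(f"{ms:9.3f} ms  {name}")
        print(f"{total:9.3f} ms  total")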
00:18:31.287 [2024-04-24 20:25:01.373881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.287 [2024-04-24 20:25:01.411862] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.287 [2024-04-24 20:25:01.411934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:31.287 [2024-04-24 20:25:01.411950] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.953 ms 00:18:31.287 [2024-04-24 20:25:01.411960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.287 [2024-04-24 20:25:01.449805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.287 [2024-04-24 20:25:01.449863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:31.287 [2024-04-24 20:25:01.449879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.798 ms 00:18:31.287 [2024-04-24 20:25:01.449891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.287 [2024-04-24 20:25:01.449964] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:31.287 [2024-04-24 20:25:01.449985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.449998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 
00:18:31.287 [2024-04-24 20:25:01.450169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:31.287 [2024-04-24 20:25:01.450367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 
wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.450988] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.451000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.451027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.451039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.451050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.451061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.451072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.451084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.451094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:31.288 [2024-04-24 20:25:01.451114] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:31.288 [2024-04-24 20:25:01.451130] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 59cba280-e672-4cc5-80b9-6f15d46f3b14 00:18:31.288 [2024-04-24 20:25:01.451141] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:31.288 [2024-04-24 20:25:01.451151] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:31.288 [2024-04-24 20:25:01.451161] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:31.288 [2024-04-24 20:25:01.451171] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:31.288 [2024-04-24 20:25:01.451181] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:31.288 [2024-04-24 20:25:01.451193] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:31.288 [2024-04-24 20:25:01.451203] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:31.288 [2024-04-24 20:25:01.451212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:31.288 [2024-04-24 20:25:01.451221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:31.288 [2024-04-24 20:25:01.451231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.288 [2024-04-24 20:25:01.451242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:31.288 [2024-04-24 20:25:01.451253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.271 ms 00:18:31.288 [2024-04-24 20:25:01.451263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.288 [2024-04-24 20:25:01.470646] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.288 [2024-04-24 20:25:01.470692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:31.288 [2024-04-24 20:25:01.470706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.388 ms 00:18:31.288 [2024-04-24 20:25:01.470716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.288 [2024-04-24 20:25:01.471057] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:31.288 [2024-04-24 20:25:01.471072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:31.288 [2024-04-24 20:25:01.471084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:18:31.288 [2024-04-24 20:25:01.471098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.548 [2024-04-24 20:25:01.530443] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.548 [2024-04-24 20:25:01.530504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:31.548 [2024-04-24 20:25:01.530519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.548 [2024-04-24 20:25:01.530529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.548 [2024-04-24 20:25:01.530635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.548 [2024-04-24 20:25:01.530647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:31.548 [2024-04-24 20:25:01.530658] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.548 [2024-04-24 20:25:01.530672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.549 [2024-04-24 20:25:01.530727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.549 [2024-04-24 20:25:01.530739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:31.549 [2024-04-24 20:25:01.530761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.549 [2024-04-24 20:25:01.530771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.549 [2024-04-24 20:25:01.530807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.549 [2024-04-24 20:25:01.530817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:31.549 [2024-04-24 20:25:01.530829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.549 [2024-04-24 20:25:01.530838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.549 [2024-04-24 20:25:01.652332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.549 [2024-04-24 20:25:01.652391] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:31.549 [2024-04-24 20:25:01.652407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.549 [2024-04-24 20:25:01.652418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.549 [2024-04-24 20:25:01.701422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.549 [2024-04-24 20:25:01.701483] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:31.549 [2024-04-24 20:25:01.701514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.549 [2024-04-24 20:25:01.701537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.549 [2024-04-24 20:25:01.701606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.549 [2024-04-24 20:25:01.701618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:31.549 [2024-04-24 20:25:01.701629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.549 [2024-04-24 20:25:01.701640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.549 [2024-04-24 20:25:01.701670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
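The Rollback quadruples here appear to be the deinit counterparts of the startup steps, reported in reverse of registration order (Initialize reloc back down to Open base bdev) with 0.000 ms durations. A sketch that separates the two phases, reusing the record shapes above; phases is a hypothetical name:

    import re

    MARKER = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] (Action|Rollback)\s*$")
    NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")

    def phases(lines):
        """Yield (phase, step name) pairs, phase being 'Action' or 'Rollback',
        assuming one log record per line."""
        phase = None
        for line in lines:
            m = MARKER.search(line)
            if m:
                phase = m.group(1)
                continue
            m = NAME.search(line)
            if m and phase is not None:
                yield phase, m.group(1)
                phase = None

Feeding the shutdown log through phases() would list the Rollback names as roughly the mirror image of the startup Action sequence.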
00:18:31.549 [2024-04-24 20:25:01.701681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:31.549 [2024-04-24 20:25:01.701692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.549 [2024-04-24 20:25:01.701702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.549 [2024-04-24 20:25:01.701817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.549 [2024-04-24 20:25:01.701830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:31.549 [2024-04-24 20:25:01.701840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.549 [2024-04-24 20:25:01.701850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.549 [2024-04-24 20:25:01.701922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.549 [2024-04-24 20:25:01.701936] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:31.549 [2024-04-24 20:25:01.701947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.549 [2024-04-24 20:25:01.701962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.549 [2024-04-24 20:25:01.702007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.549 [2024-04-24 20:25:01.702018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:31.549 [2024-04-24 20:25:01.702029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.549 [2024-04-24 20:25:01.702039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.549 [2024-04-24 20:25:01.702084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:31.549 [2024-04-24 20:25:01.702096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:31.549 [2024-04-24 20:25:01.702106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:31.549 [2024-04-24 20:25:01.702116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:31.549 [2024-04-24 20:25:01.702261] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 489.838 ms, result 0 00:18:33.455 00:18:33.455 00:18:33.455 20:25:03 -- ftl/trim.sh@72 -- # svcpid=78725 00:18:33.455 20:25:03 -- ftl/trim.sh@73 -- # waitforlisten 78725 00:18:33.455 20:25:03 -- common/autotest_common.sh@817 -- # '[' -z 78725 ']' 00:18:33.455 20:25:03 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:33.455 20:25:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.455 20:25:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:33.455 20:25:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.455 20:25:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:33.455 20:25:03 -- common/autotest_common.sh@10 -- # set +x 00:18:33.455 [2024-04-24 20:25:03.313942] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:18:33.455 [2024-04-24 20:25:03.314057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78725 ] 00:18:33.455 [2024-04-24 20:25:03.489666] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.715 [2024-04-24 20:25:03.737209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.649 20:25:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:34.649 20:25:04 -- common/autotest_common.sh@850 -- # return 0 00:18:34.649 20:25:04 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:34.907 [2024-04-24 20:25:04.968090] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:34.907 [2024-04-24 20:25:04.968176] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:35.167 [2024-04-24 20:25:05.154428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.167 [2024-04-24 20:25:05.154501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:35.167 [2024-04-24 20:25:05.154524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:35.167 [2024-04-24 20:25:05.154535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.167 [2024-04-24 20:25:05.158131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.167 [2024-04-24 20:25:05.158188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.167 [2024-04-24 20:25:05.158205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.571 ms 00:18:35.167 [2024-04-24 20:25:05.158220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.167 [2024-04-24 20:25:05.158383] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:35.167 [2024-04-24 20:25:05.159676] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:35.167 [2024-04-24 20:25:05.159722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.167 [2024-04-24 20:25:05.159737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.167 [2024-04-24 20:25:05.159752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:18:35.167 [2024-04-24 20:25:05.159763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.167 [2024-04-24 20:25:05.161413] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:35.167 [2024-04-24 20:25:05.182064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.167 [2024-04-24 20:25:05.182157] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:35.167 [2024-04-24 20:25:05.182177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.684 ms 00:18:35.168 [2024-04-24 20:25:05.182191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.168 [2024-04-24 20:25:05.182368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.168 [2024-04-24 20:25:05.182390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:35.168 [2024-04-24 20:25:05.182403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.031 ms 00:18:35.168 [2024-04-24 20:25:05.182416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.168 [2024-04-24 20:25:05.190498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.168 [2024-04-24 20:25:05.190565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.168 [2024-04-24 20:25:05.190584] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.031 ms 00:18:35.168 [2024-04-24 20:25:05.190605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.168 [2024-04-24 20:25:05.190781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.168 [2024-04-24 20:25:05.190803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.168 [2024-04-24 20:25:05.190819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:35.168 [2024-04-24 20:25:05.190836] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.168 [2024-04-24 20:25:05.190893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.168 [2024-04-24 20:25:05.190913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:35.168 [2024-04-24 20:25:05.190928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:35.168 [2024-04-24 20:25:05.190944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.168 [2024-04-24 20:25:05.190994] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:35.168 [2024-04-24 20:25:05.196786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.168 [2024-04-24 20:25:05.196849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.168 [2024-04-24 20:25:05.196878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.810 ms 00:18:35.168 [2024-04-24 20:25:05.196890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.168 [2024-04-24 20:25:05.197020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.168 [2024-04-24 20:25:05.197035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:35.168 [2024-04-24 20:25:05.197053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:35.168 [2024-04-24 20:25:05.197064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.168 [2024-04-24 20:25:05.197094] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:35.168 [2024-04-24 20:25:05.197119] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:35.168 [2024-04-24 20:25:05.197158] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:35.168 [2024-04-24 20:25:05.197191] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:35.168 [2024-04-24 20:25:05.197268] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:35.168 [2024-04-24 20:25:05.197282] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:35.168 [2024-04-24 20:25:05.197298] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout 
blob store 0x140 bytes 00:18:35.168 [2024-04-24 20:25:05.197313] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:35.168 [2024-04-24 20:25:05.197329] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:35.168 [2024-04-24 20:25:05.197341] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:35.168 [2024-04-24 20:25:05.197356] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:35.168 [2024-04-24 20:25:05.197366] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:35.168 [2024-04-24 20:25:05.197378] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:35.168 [2024-04-24 20:25:05.197390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.168 [2024-04-24 20:25:05.197409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:35.168 [2024-04-24 20:25:05.197420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:18:35.168 [2024-04-24 20:25:05.197433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.168 [2024-04-24 20:25:05.197497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.168 [2024-04-24 20:25:05.197512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:35.168 [2024-04-24 20:25:05.197523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:35.168 [2024-04-24 20:25:05.197537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.168 [2024-04-24 20:25:05.197613] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:35.168 [2024-04-24 20:25:05.197628] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:35.168 [2024-04-24 20:25:05.197643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.168 [2024-04-24 20:25:05.197658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.168 [2024-04-24 20:25:05.197670] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:35.168 [2024-04-24 20:25:05.197682] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:35.168 [2024-04-24 20:25:05.197693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:35.168 [2024-04-24 20:25:05.197705] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:35.168 [2024-04-24 20:25:05.197716] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:35.168 [2024-04-24 20:25:05.197731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.168 [2024-04-24 20:25:05.197741] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:35.168 [2024-04-24 20:25:05.197754] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:35.168 [2024-04-24 20:25:05.197764] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.168 [2024-04-24 20:25:05.197776] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:35.168 [2024-04-24 20:25:05.197786] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:35.168 [2024-04-24 20:25:05.197798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.168 [2024-04-24 20:25:05.197808] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:18:35.168 [2024-04-24 20:25:05.197820] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:35.168 [2024-04-24 20:25:05.197830] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.168 [2024-04-24 20:25:05.197864] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:35.168 [2024-04-24 20:25:05.197876] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:35.168 [2024-04-24 20:25:05.197888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:35.168 [2024-04-24 20:25:05.197900] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:35.168 [2024-04-24 20:25:05.197914] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:35.168 [2024-04-24 20:25:05.197923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:35.168 [2024-04-24 20:25:05.197938] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:35.168 [2024-04-24 20:25:05.197948] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:35.168 [2024-04-24 20:25:05.197968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:35.168 [2024-04-24 20:25:05.197985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:35.168 [2024-04-24 20:25:05.197999] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:35.168 [2024-04-24 20:25:05.198009] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:35.168 [2024-04-24 20:25:05.198022] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:35.168 [2024-04-24 20:25:05.198032] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:35.168 [2024-04-24 20:25:05.198044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:35.168 [2024-04-24 20:25:05.198054] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:35.168 [2024-04-24 20:25:05.198066] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:35.168 [2024-04-24 20:25:05.198076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.168 [2024-04-24 20:25:05.198088] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:35.168 [2024-04-24 20:25:05.198098] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:35.168 [2024-04-24 20:25:05.198111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.168 [2024-04-24 20:25:05.198120] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:35.169 [2024-04-24 20:25:05.198137] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:35.169 [2024-04-24 20:25:05.198147] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.169 [2024-04-24 20:25:05.198161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.169 [2024-04-24 20:25:05.198172] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:35.169 [2024-04-24 20:25:05.198185] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:35.169 [2024-04-24 20:25:05.198195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:35.169 [2024-04-24 20:25:05.198207] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:35.169 [2024-04-24 20:25:05.198217] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:35.169 [2024-04-24 20:25:05.198229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:35.169 [2024-04-24 20:25:05.198240] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:35.169 [2024-04-24 20:25:05.198257] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.169 [2024-04-24 20:25:05.198269] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:35.169 [2024-04-24 20:25:05.198283] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:35.169 [2024-04-24 20:25:05.198295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:35.169 [2024-04-24 20:25:05.198310] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:35.169 [2024-04-24 20:25:05.198321] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:35.169 [2024-04-24 20:25:05.198338] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:35.169 [2024-04-24 20:25:05.198350] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:35.169 [2024-04-24 20:25:05.198364] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:35.169 [2024-04-24 20:25:05.198375] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:35.169 [2024-04-24 20:25:05.198388] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:35.169 [2024-04-24 20:25:05.198400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:35.169 [2024-04-24 20:25:05.198413] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:35.169 [2024-04-24 20:25:05.198425] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:35.169 [2024-04-24 20:25:05.198438] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:35.169 [2024-04-24 20:25:05.198451] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.169 [2024-04-24 20:25:05.198469] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:35.169 [2024-04-24 20:25:05.198481] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:35.169 [2024-04-24 20:25:05.198494] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:35.169 [2024-04-24 20:25:05.198507] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:35.169 [2024-04-24 20:25:05.198522] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.169 [2024-04-24 20:25:05.198538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:35.169 [2024-04-24 20:25:05.198554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.945 ms 00:18:35.169 [2024-04-24 20:25:05.198565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.169 [2024-04-24 20:25:05.225688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.169 [2024-04-24 20:25:05.225806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.169 [2024-04-24 20:25:05.225844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.095 ms 00:18:35.169 [2024-04-24 20:25:05.225878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.169 [2024-04-24 20:25:05.226131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.169 [2024-04-24 20:25:05.226160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:35.169 [2024-04-24 20:25:05.226186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:35.169 [2024-04-24 20:25:05.226204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.169 [2024-04-24 20:25:05.282559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.169 [2024-04-24 20:25:05.282627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.169 [2024-04-24 20:25:05.282654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.396 ms 00:18:35.169 [2024-04-24 20:25:05.282668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.169 [2024-04-24 20:25:05.282819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.169 [2024-04-24 20:25:05.282836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.169 [2024-04-24 20:25:05.282867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:35.169 [2024-04-24 20:25:05.282881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.169 [2024-04-24 20:25:05.283357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.169 [2024-04-24 20:25:05.283384] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.169 [2024-04-24 20:25:05.283403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:18:35.169 [2024-04-24 20:25:05.283419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.169 [2024-04-24 20:25:05.283563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.169 [2024-04-24 20:25:05.283583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.169 [2024-04-24 20:25:05.283601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:18:35.169 [2024-04-24 20:25:05.283614] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.169 [2024-04-24 20:25:05.308098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.169 [2024-04-24 
20:25:05.308190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.169 [2024-04-24 20:25:05.308226] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.484 ms 00:18:35.169 [2024-04-24 20:25:05.308246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.169 [2024-04-24 20:25:05.329555] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:35.169 [2024-04-24 20:25:05.329635] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:35.169 [2024-04-24 20:25:05.329663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.169 [2024-04-24 20:25:05.329677] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:35.169 [2024-04-24 20:25:05.329698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.195 ms 00:18:35.169 [2024-04-24 20:25:05.329709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.169 [2024-04-24 20:25:05.367132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.169 [2024-04-24 20:25:05.367328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:35.169 [2024-04-24 20:25:05.367372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.209 ms 00:18:35.169 [2024-04-24 20:25:05.367396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.169 [2024-04-24 20:25:05.390809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.169 [2024-04-24 20:25:05.390926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:35.169 [2024-04-24 20:25:05.390962] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.193 ms 00:18:35.169 [2024-04-24 20:25:05.390981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.427 [2024-04-24 20:25:05.412946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.428 [2024-04-24 20:25:05.413031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:35.428 [2024-04-24 20:25:05.413064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.782 ms 00:18:35.428 [2024-04-24 20:25:05.413083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.428 [2024-04-24 20:25:05.413768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.428 [2024-04-24 20:25:05.413824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:35.428 [2024-04-24 20:25:05.413871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:18:35.428 [2024-04-24 20:25:05.413896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.428 [2024-04-24 20:25:05.522620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.428 [2024-04-24 20:25:05.522716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:35.428 [2024-04-24 20:25:05.522738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 108.827 ms 00:18:35.428 [2024-04-24 20:25:05.522774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.428 [2024-04-24 20:25:05.539848] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:35.428 [2024-04-24 20:25:05.558197] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.428 [2024-04-24 20:25:05.558263] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:35.428 [2024-04-24 20:25:05.558280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.264 ms 00:18:35.428 [2024-04-24 20:25:05.558298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.428 [2024-04-24 20:25:05.558424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.428 [2024-04-24 20:25:05.558441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:35.428 [2024-04-24 20:25:05.558453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:35.428 [2024-04-24 20:25:05.558474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.428 [2024-04-24 20:25:05.558530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.428 [2024-04-24 20:25:05.558545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:35.428 [2024-04-24 20:25:05.558556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:35.428 [2024-04-24 20:25:05.558570] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.428 [2024-04-24 20:25:05.560778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.428 [2024-04-24 20:25:05.560824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:35.428 [2024-04-24 20:25:05.560838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.186 ms 00:18:35.428 [2024-04-24 20:25:05.560851] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.428 [2024-04-24 20:25:05.560903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.428 [2024-04-24 20:25:05.560917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:35.428 [2024-04-24 20:25:05.560928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:35.428 [2024-04-24 20:25:05.560941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.428 [2024-04-24 20:25:05.560981] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:35.428 [2024-04-24 20:25:05.560999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.428 [2024-04-24 20:25:05.561012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:35.428 [2024-04-24 20:25:05.561028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:35.428 [2024-04-24 20:25:05.561038] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.428 [2024-04-24 20:25:05.605021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.428 [2024-04-24 20:25:05.605099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:35.428 [2024-04-24 20:25:05.605121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.017 ms 00:18:35.428 [2024-04-24 20:25:05.605133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.428 [2024-04-24 20:25:05.605329] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.428 [2024-04-24 20:25:05.605348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:35.428 [2024-04-24 20:25:05.605363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.038 ms 00:18:35.428 [2024-04-24 20:25:05.605375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.428 [2024-04-24 20:25:05.606446] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.428 [2024-04-24 20:25:05.613123] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 452.438 ms, result 0 00:18:35.428 [2024-04-24 20:25:05.614027] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:35.428 Some configs were skipped because the RPC state that can call them passed over. 00:18:35.686 20:25:05 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:35.944 [2024-04-24 20:25:05.988361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.944 [2024-04-24 20:25:05.988490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:35.944 [2024-04-24 20:25:05.988521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.782 ms 00:18:35.944 [2024-04-24 20:25:05.988546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.944 [2024-04-24 20:25:05.988614] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 42.050 ms, result 0 00:18:35.944 true 00:18:35.944 20:25:06 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:36.202 [2024-04-24 20:25:06.295421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:36.202 [2024-04-24 20:25:06.295510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:36.202 [2024-04-24 20:25:06.295545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.060 ms 00:18:36.202 [2024-04-24 20:25:06.295566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:36.202 [2024-04-24 20:25:06.295676] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 39.310 ms, result 0 00:18:36.202 true 00:18:36.202 20:25:06 -- ftl/trim.sh@81 -- # killprocess 78725 00:18:36.202 20:25:06 -- common/autotest_common.sh@936 -- # '[' -z 78725 ']' 00:18:36.202 20:25:06 -- common/autotest_common.sh@940 -- # kill -0 78725 00:18:36.202 20:25:06 -- common/autotest_common.sh@941 -- # uname 00:18:36.202 20:25:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:36.202 20:25:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78725 00:18:36.202 20:25:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:36.202 20:25:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:36.202 20:25:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78725' 00:18:36.202 killing process with pid 78725 00:18:36.202 20:25:06 -- common/autotest_common.sh@955 -- # kill 78725 00:18:36.202 20:25:06 -- common/autotest_common.sh@960 -- # wait 78725 00:18:37.579 [2024-04-24 20:25:07.554350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.579 [2024-04-24 20:25:07.554420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:37.579 [2024-04-24 20:25:07.554439] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:37.579 [2024-04-24 
20:25:07.554453] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.579 [2024-04-24 20:25:07.554478] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:37.579 [2024-04-24 20:25:07.558338] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.579 [2024-04-24 20:25:07.558375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:37.579 [2024-04-24 20:25:07.558391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.842 ms 00:18:37.579 [2024-04-24 20:25:07.558405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.579 [2024-04-24 20:25:07.558706] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.579 [2024-04-24 20:25:07.558729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:37.579 [2024-04-24 20:25:07.558744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:18:37.579 [2024-04-24 20:25:07.558776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.579 [2024-04-24 20:25:07.562236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.579 [2024-04-24 20:25:07.562271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:37.579 [2024-04-24 20:25:07.562289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.440 ms 00:18:37.579 [2024-04-24 20:25:07.562301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.579 [2024-04-24 20:25:07.568467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.579 [2024-04-24 20:25:07.568506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:37.579 [2024-04-24 20:25:07.568525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.132 ms 00:18:37.579 [2024-04-24 20:25:07.568536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.579 [2024-04-24 20:25:07.585125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.579 [2024-04-24 20:25:07.585168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:37.579 [2024-04-24 20:25:07.585186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.522 ms 00:18:37.579 [2024-04-24 20:25:07.585197] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.579 [2024-04-24 20:25:07.597266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.579 [2024-04-24 20:25:07.597313] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:37.579 [2024-04-24 20:25:07.597332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.009 ms 00:18:37.579 [2024-04-24 20:25:07.597343] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.579 [2024-04-24 20:25:07.597507] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.579 [2024-04-24 20:25:07.597521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:37.579 [2024-04-24 20:25:07.597540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:37.579 [2024-04-24 20:25:07.597551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.579 [2024-04-24 20:25:07.613754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.579 [2024-04-24 20:25:07.613797] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:37.579 [2024-04-24 20:25:07.613829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.200 ms 00:18:37.579 [2024-04-24 20:25:07.613840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.579 [2024-04-24 20:25:07.630975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.579 [2024-04-24 20:25:07.631035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:37.579 [2024-04-24 20:25:07.631057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.093 ms 00:18:37.579 [2024-04-24 20:25:07.631068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.579 [2024-04-24 20:25:07.647990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.579 [2024-04-24 20:25:07.648068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:37.579 [2024-04-24 20:25:07.648090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.886 ms 00:18:37.579 [2024-04-24 20:25:07.648101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.579 [2024-04-24 20:25:07.665093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.579 [2024-04-24 20:25:07.665158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:37.579 [2024-04-24 20:25:07.665178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.908 ms 00:18:37.579 [2024-04-24 20:25:07.665189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.579 [2024-04-24 20:25:07.665258] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:37.579 [2024-04-24 20:25:07.665280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665454] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:37.579 [2024-04-24 20:25:07.665582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665774] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.665998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 
20:25:07.666129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:18:37.580 [2024-04-24 20:25:07.666457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:37.580 [2024-04-24 20:25:07.666647] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:37.580 [2024-04-24 20:25:07.666661] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 59cba280-e672-4cc5-80b9-6f15d46f3b14 00:18:37.580 [2024-04-24 20:25:07.666673] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:37.580 [2024-04-24 20:25:07.666689] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:37.580 [2024-04-24 20:25:07.666700] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:37.580 [2024-04-24 20:25:07.666715] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:37.580 [2024-04-24 20:25:07.666726] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:37.580 [2024-04-24 20:25:07.666743] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:37.580 [2024-04-24 20:25:07.666764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:37.580 [2024-04-24 20:25:07.666777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:37.580 [2024-04-24 20:25:07.666787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:37.580 [2024-04-24 20:25:07.666800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.580 [2024-04-24 20:25:07.666812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:37.580 [2024-04-24 20:25:07.666826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.547 ms 00:18:37.580 [2024-04-24 20:25:07.666837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:37.580 [2024-04-24 20:25:07.688161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.581 [2024-04-24 20:25:07.688221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:37.581 [2024-04-24 20:25:07.688241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.308 ms 00:18:37.581 [2024-04-24 20:25:07.688255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.581 [2024-04-24 20:25:07.688617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.581 [2024-04-24 20:25:07.688631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:37.581 [2024-04-24 20:25:07.688646] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:18:37.581 [2024-04-24 20:25:07.688656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.581 [2024-04-24 20:25:07.762550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.581 [2024-04-24 20:25:07.762613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:37.581 [2024-04-24 20:25:07.762637] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.581 [2024-04-24 20:25:07.762648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.581 [2024-04-24 20:25:07.762807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.581 [2024-04-24 20:25:07.762821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:37.581 [2024-04-24 20:25:07.762834] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.581 [2024-04-24 20:25:07.762845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.581 [2024-04-24 20:25:07.762918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.581 [2024-04-24 20:25:07.762932] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:37.581 [2024-04-24 20:25:07.762945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.581 [2024-04-24 20:25:07.762959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.581 [2024-04-24 20:25:07.762987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.581 [2024-04-24 20:25:07.762998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:37.581 [2024-04-24 20:25:07.763011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.581 [2024-04-24 20:25:07.763022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.839 [2024-04-24 20:25:07.895344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.839 [2024-04-24 20:25:07.895396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:37.839 [2024-04-24 20:25:07.895415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.839 [2024-04-24 20:25:07.895430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.839 [2024-04-24 20:25:07.945179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.839 [2024-04-24 20:25:07.945229] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:37.839 [2024-04-24 20:25:07.945247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.839 
[2024-04-24 20:25:07.945258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.839 [2024-04-24 20:25:07.945358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.839 [2024-04-24 20:25:07.945371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:37.839 [2024-04-24 20:25:07.945384] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.839 [2024-04-24 20:25:07.945395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.839 [2024-04-24 20:25:07.945436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.839 [2024-04-24 20:25:07.945447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:37.839 [2024-04-24 20:25:07.945460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.840 [2024-04-24 20:25:07.945470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.840 [2024-04-24 20:25:07.945589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.840 [2024-04-24 20:25:07.945603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:37.840 [2024-04-24 20:25:07.945616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.840 [2024-04-24 20:25:07.945627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.840 [2024-04-24 20:25:07.945668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.840 [2024-04-24 20:25:07.945683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:37.840 [2024-04-24 20:25:07.945695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.840 [2024-04-24 20:25:07.945705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.840 [2024-04-24 20:25:07.945749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.840 [2024-04-24 20:25:07.945761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:37.840 [2024-04-24 20:25:07.945774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.840 [2024-04-24 20:25:07.945784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.840 [2024-04-24 20:25:07.945839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:37.840 [2024-04-24 20:25:07.945850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:37.840 [2024-04-24 20:25:07.945878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:37.840 [2024-04-24 20:25:07.945888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.840 [2024-04-24 20:25:07.946032] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 392.292 ms, result 0 00:18:39.215 20:25:09 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:39.215 20:25:09 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:39.215 [2024-04-24 20:25:09.376186] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
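What the tail of this log records is the trim test's unmap-then-readback sequence: two bdev_ftl_unmap RPCs against the live ftl0 bdev (trim.sh@78 and @79), a clean 'FTL shutdown', and then a fresh spdk_dd process that re-creates ftl0 from the saved JSON config and reads the data back. A minimal sketch of that sequence, assuming the checkout at /home/vagrant/spdk_repo/spdk and an already-created ftl0 bdev; the flags are copied from the commands logged above, while the comments and shortened paths are illustrative additions:

cd /home/vagrant/spdk_repo/spdk

# Trim two 1024-block ranges on the live ftl0 bdev, exactly as the
# trim.sh@78 and trim.sh@79 calls above; each shows up in the log as an
# 'FTL unmap' management process finishing with result 0.
scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

# After the target is killed (the "'FTL shutdown' ... result 0" line),
# read the device back offline: spdk_dd starts its own one-core SPDK app,
# re-creates ftl0 from the JSON config, and copies --count=65536 input
# blocks into a data file for later comparison.
build/bin/spdk_dd --ib=ftl0 --of=test/ftl/data --count=65536 \
    --json=test/ftl/config/ftl.json

The "Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization..." banner that follows is this spdk_dd process coming up on a single core (EAL is passed -c 0x1), and the FTL startup steps after it (Load super block, Restore NV cache metadata, Restore L2P, and so on) appear to be the metadata-restore path taken because the shutdown above ended with 'Set FTL clean state'.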
00:18:39.215 [2024-04-24 20:25:09.376312] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78796 ] 00:18:39.474 [2024-04-24 20:25:09.549633] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.732 [2024-04-24 20:25:09.801801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.991 [2024-04-24 20:25:10.225234] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:39.991 [2024-04-24 20:25:10.225329] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:40.251 [2024-04-24 20:25:10.383044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.251 [2024-04-24 20:25:10.383109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:40.251 [2024-04-24 20:25:10.383126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:40.251 [2024-04-24 20:25:10.383141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.251 [2024-04-24 20:25:10.386816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.251 [2024-04-24 20:25:10.386877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:40.251 [2024-04-24 20:25:10.386893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.656 ms 00:18:40.251 [2024-04-24 20:25:10.386904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.251 [2024-04-24 20:25:10.387142] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:40.251 [2024-04-24 20:25:10.388419] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:40.251 [2024-04-24 20:25:10.388450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.251 [2024-04-24 20:25:10.388462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:40.251 [2024-04-24 20:25:10.388478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.321 ms 00:18:40.251 [2024-04-24 20:25:10.388489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.251 [2024-04-24 20:25:10.390113] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:40.251 [2024-04-24 20:25:10.411496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.251 [2024-04-24 20:25:10.411565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:40.251 [2024-04-24 20:25:10.411583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.413 ms 00:18:40.251 [2024-04-24 20:25:10.411595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.251 [2024-04-24 20:25:10.411766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.251 [2024-04-24 20:25:10.411782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:40.251 [2024-04-24 20:25:10.411795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:40.251 [2024-04-24 20:25:10.411811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.251 [2024-04-24 20:25:10.419687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.251 [2024-04-24 
20:25:10.419728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.251 [2024-04-24 20:25:10.419742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.818 ms 00:18:40.251 [2024-04-24 20:25:10.419753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.251 [2024-04-24 20:25:10.419903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.251 [2024-04-24 20:25:10.419923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.251 [2024-04-24 20:25:10.419940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:18:40.251 [2024-04-24 20:25:10.419963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.251 [2024-04-24 20:25:10.420002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.251 [2024-04-24 20:25:10.420013] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.251 [2024-04-24 20:25:10.420024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:40.251 [2024-04-24 20:25:10.420034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.252 [2024-04-24 20:25:10.420061] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:40.252 [2024-04-24 20:25:10.426269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.252 [2024-04-24 20:25:10.426314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.252 [2024-04-24 20:25:10.426328] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.224 ms 00:18:40.252 [2024-04-24 20:25:10.426339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.252 [2024-04-24 20:25:10.426434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.252 [2024-04-24 20:25:10.426452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.252 [2024-04-24 20:25:10.426464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:40.252 [2024-04-24 20:25:10.426475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.252 [2024-04-24 20:25:10.426501] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:40.252 [2024-04-24 20:25:10.426526] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:40.252 [2024-04-24 20:25:10.426561] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:40.252 [2024-04-24 20:25:10.426579] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:40.252 [2024-04-24 20:25:10.426654] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:40.252 [2024-04-24 20:25:10.426667] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.252 [2024-04-24 20:25:10.426681] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:40.252 [2024-04-24 20:25:10.426695] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.252 [2024-04-24 20:25:10.426707] ftl_layout.c: 675:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.252 [2024-04-24 20:25:10.426718] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:40.252 [2024-04-24 20:25:10.426729] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.252 [2024-04-24 20:25:10.426739] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:40.252 [2024-04-24 20:25:10.426762] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:40.252 [2024-04-24 20:25:10.426774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.252 [2024-04-24 20:25:10.426785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.252 [2024-04-24 20:25:10.426798] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:18:40.252 [2024-04-24 20:25:10.426812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.252 [2024-04-24 20:25:10.426889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.252 [2024-04-24 20:25:10.426902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.252 [2024-04-24 20:25:10.426912] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:40.252 [2024-04-24 20:25:10.426922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.252 [2024-04-24 20:25:10.426997] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.252 [2024-04-24 20:25:10.427010] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.252 [2024-04-24 20:25:10.427021] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.252 [2024-04-24 20:25:10.427035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.252 [2024-04-24 20:25:10.427048] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.252 [2024-04-24 20:25:10.427058] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.252 [2024-04-24 20:25:10.427067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:40.252 [2024-04-24 20:25:10.427077] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.252 [2024-04-24 20:25:10.427087] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:40.252 [2024-04-24 20:25:10.427097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.252 [2024-04-24 20:25:10.427121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.252 [2024-04-24 20:25:10.427131] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:40.252 [2024-04-24 20:25:10.427141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.252 [2024-04-24 20:25:10.427150] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.252 [2024-04-24 20:25:10.427160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:40.252 [2024-04-24 20:25:10.427170] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.252 [2024-04-24 20:25:10.427179] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:40.252 [2024-04-24 20:25:10.427189] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:40.252 [2024-04-24 20:25:10.427199] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:18:40.252 [2024-04-24 20:25:10.427209] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:40.252 [2024-04-24 20:25:10.427219] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:40.252 [2024-04-24 20:25:10.427229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:40.252 [2024-04-24 20:25:10.427238] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.252 [2024-04-24 20:25:10.427248] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:40.252 [2024-04-24 20:25:10.427258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.252 [2024-04-24 20:25:10.427267] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.252 [2024-04-24 20:25:10.427277] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:40.252 [2024-04-24 20:25:10.427286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.252 [2024-04-24 20:25:10.427296] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.252 [2024-04-24 20:25:10.427305] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:40.252 [2024-04-24 20:25:10.427315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.252 [2024-04-24 20:25:10.427324] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.252 [2024-04-24 20:25:10.427334] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:40.252 [2024-04-24 20:25:10.427344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:40.252 [2024-04-24 20:25:10.427354] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.252 [2024-04-24 20:25:10.427364] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:40.252 [2024-04-24 20:25:10.427374] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.252 [2024-04-24 20:25:10.427383] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.252 [2024-04-24 20:25:10.427393] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:40.252 [2024-04-24 20:25:10.427403] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.252 [2024-04-24 20:25:10.427412] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.252 [2024-04-24 20:25:10.427422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.252 [2024-04-24 20:25:10.427432] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.252 [2024-04-24 20:25:10.427442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.252 [2024-04-24 20:25:10.427452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.252 [2024-04-24 20:25:10.427462] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.252 [2024-04-24 20:25:10.427471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.252 [2024-04-24 20:25:10.427481] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.252 [2024-04-24 20:25:10.427491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.252 [2024-04-24 20:25:10.427500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.252 [2024-04-24 20:25:10.427511] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.252 [2024-04-24 20:25:10.427525] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.252 [2024-04-24 20:25:10.427537] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:40.252 [2024-04-24 20:25:10.427547] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:40.252 [2024-04-24 20:25:10.427558] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:40.252 [2024-04-24 20:25:10.427570] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:40.252 [2024-04-24 20:25:10.427580] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:40.252 [2024-04-24 20:25:10.427591] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:40.252 [2024-04-24 20:25:10.427601] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:40.252 [2024-04-24 20:25:10.427612] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:40.252 [2024-04-24 20:25:10.427623] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:40.252 [2024-04-24 20:25:10.427633] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:40.252 [2024-04-24 20:25:10.427644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:40.252 [2024-04-24 20:25:10.427655] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:40.252 [2024-04-24 20:25:10.427669] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:40.252 [2024-04-24 20:25:10.427680] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.252 [2024-04-24 20:25:10.427692] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.252 [2024-04-24 20:25:10.427703] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.253 [2024-04-24 20:25:10.427715] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.253 [2024-04-24 20:25:10.427726] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.253 [2024-04-24 20:25:10.427737] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.253 [2024-04-24 20:25:10.427748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.253 [2024-04-24 20:25:10.427759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.253 [2024-04-24 20:25:10.427775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:18:40.253 [2024-04-24 20:25:10.427786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.253 [2024-04-24 20:25:10.453776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.253 [2024-04-24 20:25:10.453835] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:40.253 [2024-04-24 20:25:10.453851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.977 ms 00:18:40.253 [2024-04-24 20:25:10.453872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.253 [2024-04-24 20:25:10.454039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.253 [2024-04-24 20:25:10.454052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:40.253 [2024-04-24 20:25:10.454065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:40.253 [2024-04-24 20:25:10.454076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.511 [2024-04-24 20:25:10.519083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.511 [2024-04-24 20:25:10.519138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:40.511 [2024-04-24 20:25:10.519155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.085 ms 00:18:40.511 [2024-04-24 20:25:10.519166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.511 [2024-04-24 20:25:10.519282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.511 [2024-04-24 20:25:10.519296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:40.511 [2024-04-24 20:25:10.519308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:40.511 [2024-04-24 20:25:10.519318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.511 [2024-04-24 20:25:10.519770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.511 [2024-04-24 20:25:10.519784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:40.511 [2024-04-24 20:25:10.519796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:18:40.511 [2024-04-24 20:25:10.519806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.511 [2024-04-24 20:25:10.519942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.511 [2024-04-24 20:25:10.519959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:40.511 [2024-04-24 20:25:10.519970] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:40.511 [2024-04-24 20:25:10.519981] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.511 [2024-04-24 20:25:10.544268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.511 [2024-04-24 20:25:10.544326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:40.511 [2024-04-24 20:25:10.544342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.298 ms 00:18:40.511 
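The superblock metadata table above records the same layout in raw blocks. A minimal sketch of the conversion, assuming a 4 KiB FTL block (the value that makes these figures line up) and reading type:0x2 as the l2p region since its size matches the 90.00 MiB shown in the layout dump:

blk=4096                                 # assumed bytes per FTL block
offs=$((0x20)); sz=$((0x5a00))           # Region type:0x2 blk_offs:0x20 blk_sz:0x5a00
awk -v o=$offs -v s=$sz -v b=$blk 'BEGIN {
  printf "offset: %.2f MiB\n", o*b/1048576   # 0.12 MiB
  printf "blocks: %.2f MiB\n", s*b/1048576   # 90.00 MiB, the l2p region size
}'
printf 'next region blk_offs: 0x%x\n' $((offs + sz))   # 0x5a20, where type:0x3 begins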
[2024-04-24 20:25:10.544353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.511 [2024-04-24 20:25:10.566460] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:40.511 [2024-04-24 20:25:10.566529] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:40.511 [2024-04-24 20:25:10.566551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.511 [2024-04-24 20:25:10.566564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:40.511 [2024-04-24 20:25:10.566580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.068 ms 00:18:40.511 [2024-04-24 20:25:10.566590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.511 [2024-04-24 20:25:10.600775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.512 [2024-04-24 20:25:10.600862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:40.512 [2024-04-24 20:25:10.600880] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.086 ms 00:18:40.512 [2024-04-24 20:25:10.600906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.512 [2024-04-24 20:25:10.623612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.512 [2024-04-24 20:25:10.623689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:40.512 [2024-04-24 20:25:10.623707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.563 ms 00:18:40.512 [2024-04-24 20:25:10.623719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.512 [2024-04-24 20:25:10.645796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.512 [2024-04-24 20:25:10.645872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:40.512 [2024-04-24 20:25:10.645890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.935 ms 00:18:40.512 [2024-04-24 20:25:10.645900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.512 [2024-04-24 20:25:10.646481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.512 [2024-04-24 20:25:10.646522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:40.512 [2024-04-24 20:25:10.646536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:18:40.512 [2024-04-24 20:25:10.646547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.770 [2024-04-24 20:25:10.748457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.770 [2024-04-24 20:25:10.748531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:40.770 [2024-04-24 20:25:10.748550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.044 ms 00:18:40.770 [2024-04-24 20:25:10.748562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.770 [2024-04-24 20:25:10.764327] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:40.770 [2024-04-24 20:25:10.781831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.770 [2024-04-24 20:25:10.781924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:40.770 [2024-04-24 20:25:10.781940] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.163 ms 00:18:40.770 [2024-04-24 20:25:10.781951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.770 [2024-04-24 20:25:10.782074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.770 [2024-04-24 20:25:10.782088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:40.770 [2024-04-24 20:25:10.782101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:40.770 [2024-04-24 20:25:10.782112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.770 [2024-04-24 20:25:10.782170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.770 [2024-04-24 20:25:10.782181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:40.770 [2024-04-24 20:25:10.782196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:40.770 [2024-04-24 20:25:10.782206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.770 [2024-04-24 20:25:10.784349] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.770 [2024-04-24 20:25:10.784381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:40.770 [2024-04-24 20:25:10.784393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.123 ms 00:18:40.770 [2024-04-24 20:25:10.784403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.770 [2024-04-24 20:25:10.784436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.770 [2024-04-24 20:25:10.784447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:40.770 [2024-04-24 20:25:10.784458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:40.770 [2024-04-24 20:25:10.784473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.770 [2024-04-24 20:25:10.784506] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:40.770 [2024-04-24 20:25:10.784519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.770 [2024-04-24 20:25:10.784529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:40.770 [2024-04-24 20:25:10.784540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:40.770 [2024-04-24 20:25:10.784549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.770 [2024-04-24 20:25:10.825212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.770 [2024-04-24 20:25:10.825276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:40.770 [2024-04-24 20:25:10.825306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.699 ms 00:18:40.770 [2024-04-24 20:25:10.825317] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.770 [2024-04-24 20:25:10.825494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.770 [2024-04-24 20:25:10.825507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:40.770 [2024-04-24 20:25:10.825521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:40.770 [2024-04-24 20:25:10.825530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.770 [2024-04-24 20:25:10.826576] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:40.770 [2024-04-24 20:25:10.832712] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 443.925 ms, result 0 00:18:40.771 [2024-04-24 20:25:10.833716] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:40.771 [2024-04-24 20:25:10.852323] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:49.356  Copying: 34/256 [MB] (34 MBps) Copying: 65/256 [MB] (30 MBps) Copying: 95/256 [MB] (29 MBps) Copying: 122/256 [MB] (26 MBps) Copying: 148/256 [MB] (26 MBps) Copying: 178/256 [MB] (30 MBps) Copying: 210/256 [MB] (31 MBps) Copying: 237/256 [MB] (27 MBps) Copying: 256/256 [MB] (average 29 MBps)[2024-04-24 20:25:19.477943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:49.356 [2024-04-24 20:25:19.494840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.356 [2024-04-24 20:25:19.494918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:49.356 [2024-04-24 20:25:19.494941] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:49.356 [2024-04-24 20:25:19.494956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.356 [2024-04-24 20:25:19.494994] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:49.356 [2024-04-24 20:25:19.499617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.356 [2024-04-24 20:25:19.499667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:49.356 [2024-04-24 20:25:19.499696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.603 ms 00:18:49.356 [2024-04-24 20:25:19.499709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.356 [2024-04-24 20:25:19.500043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.356 [2024-04-24 20:25:19.500063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:49.356 [2024-04-24 20:25:19.500076] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:18:49.356 [2024-04-24 20:25:19.500087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.356 [2024-04-24 20:25:19.503214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.356 [2024-04-24 20:25:19.503243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:49.356 [2024-04-24 20:25:19.503257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.113 ms 00:18:49.356 [2024-04-24 20:25:19.503269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.356 [2024-04-24 20:25:19.509398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.356 [2024-04-24 20:25:19.509436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:49.356 [2024-04-24 20:25:19.509449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.111 ms 00:18:49.356 [2024-04-24 20:25:19.509461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.356 [2024-04-24 20:25:19.552630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.356 [2024-04-24 20:25:19.552705] 
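The copy throughput reported above can be roughly cross-checked against the wall-clock stamps: the IO channel comes up at 20:25:10.852 and is torn down at 20:25:19.478, so 256 MB moved in about 8.6 s. A sketch of that arithmetic (timestamps read off this log, so only approximate bounds for the copy itself):

start=10.852; end=19.478   # seconds-within-minute of the create/destroy stamps above
awk -v s=$start -v e=$end 'BEGIN { printf "%.0f MBps\n", 256/(e-s) }'   # ~30, consistent with "average 29 MBps"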
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:49.356 [2024-04-24 20:25:19.552725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.156 ms 00:18:49.357 [2024-04-24 20:25:19.552737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.357 [2024-04-24 20:25:19.579566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.357 [2024-04-24 20:25:19.579639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:49.357 [2024-04-24 20:25:19.579659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.749 ms 00:18:49.357 [2024-04-24 20:25:19.579672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.357 [2024-04-24 20:25:19.579917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.357 [2024-04-24 20:25:19.579956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:49.357 [2024-04-24 20:25:19.579968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:18:49.357 [2024-04-24 20:25:19.579980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.616 [2024-04-24 20:25:19.624763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.616 [2024-04-24 20:25:19.624838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:49.616 [2024-04-24 20:25:19.624865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.829 ms 00:18:49.616 [2024-04-24 20:25:19.624878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.616 [2024-04-24 20:25:19.669520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.616 [2024-04-24 20:25:19.669596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:49.616 [2024-04-24 20:25:19.669615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.603 ms 00:18:49.616 [2024-04-24 20:25:19.669629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.616 [2024-04-24 20:25:19.716092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.616 [2024-04-24 20:25:19.716173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:49.616 [2024-04-24 20:25:19.716209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.423 ms 00:18:49.616 [2024-04-24 20:25:19.716233] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.616 [2024-04-24 20:25:19.760726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.616 [2024-04-24 20:25:19.760817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:49.616 [2024-04-24 20:25:19.760836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.392 ms 00:18:49.616 [2024-04-24 20:25:19.760848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.616 [2024-04-24 20:25:19.760991] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:49.616 [2024-04-24 20:25:19.761019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761634] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761928] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:49.616 [2024-04-24 20:25:19.761997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:49.617 [2024-04-24 20:25:19.762221] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:49.617 [2024-04-24 20:25:19.762238] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 59cba280-e672-4cc5-80b9-6f15d46f3b14 00:18:49.617 [2024-04-24 20:25:19.762250] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:49.617 [2024-04-24 20:25:19.762261] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:49.617 [2024-04-24 20:25:19.762272] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:49.617 [2024-04-24 20:25:19.762283] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:49.617 [2024-04-24 20:25:19.762294] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:49.617 [2024-04-24 20:25:19.762306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:49.617 [2024-04-24 20:25:19.762317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:49.617 [2024-04-24 20:25:19.762327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:49.617 [2024-04-24 20:25:19.762336] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:49.617 [2024-04-24 20:25:19.762347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.617 [2024-04-24 20:25:19.762358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:49.617 [2024-04-24 20:25:19.762369] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.359 ms 00:18:49.617 [2024-04-24 20:25:19.762380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.617 [2024-04-24 20:25:19.785302] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.617 [2024-04-24 20:25:19.785378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:49.617 [2024-04-24 20:25:19.785398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.923 ms 00:18:49.617 [2024-04-24 20:25:19.785410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.617 [2024-04-24 20:25:19.785783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:49.617 [2024-04-24 20:25:19.785796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:49.617 [2024-04-24 20:25:19.785809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:18:49.617 [2024-04-24 20:25:19.785833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.617 [2024-04-24 20:25:19.848838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.617 [2024-04-24 20:25:19.848944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:49.617 [2024-04-24 20:25:19.848963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.617 [2024-04-24 20:25:19.848976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.617 [2024-04-24 20:25:19.849170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.617 [2024-04-24 20:25:19.849183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:49.617 [2024-04-24 20:25:19.849194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.617 [2024-04-24 20:25:19.849214] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.617 [2024-04-24 20:25:19.849281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.617 [2024-04-24 20:25:19.849294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:49.617 [2024-04-24 20:25:19.849305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
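Two figures in the dump above are worth decoding. Each band spans 261120 blocks, which is 1020 MiB under the same 4 KiB block-size assumption, and the WAF line is presumably the usual total-writes to user-writes ratio, which degenerates to inf while user writes are still zero. A small sketch under those assumptions:

echo "$((261120 * 4096 / 1048576)) MiB per band"   # 1020, assuming 4 KiB FTL blocks
total=960; user=0                                  # "total writes: 960", "user writes: 0"
if [ "$user" -eq 0 ]; then
  echo "WAF: inf"                                  # no user writes yet, so the ratio is undefined
else
  awk -v t=$total -v u=$user 'BEGIN { printf "WAF: %.2f\n", t/u }'
fi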
0.000 ms 00:18:49.617 [2024-04-24 20:25:19.849315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.617 [2024-04-24 20:25:19.849337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.617 [2024-04-24 20:25:19.849347] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:49.617 [2024-04-24 20:25:19.849358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.617 [2024-04-24 20:25:19.849368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.876 [2024-04-24 20:25:19.984014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.876 [2024-04-24 20:25:19.984099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:49.876 [2024-04-24 20:25:19.984118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.876 [2024-04-24 20:25:19.984130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.876 [2024-04-24 20:25:20.039080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.876 [2024-04-24 20:25:20.039163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:49.876 [2024-04-24 20:25:20.039183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.876 [2024-04-24 20:25:20.039212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.876 [2024-04-24 20:25:20.039312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.876 [2024-04-24 20:25:20.039325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:49.876 [2024-04-24 20:25:20.039337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.876 [2024-04-24 20:25:20.039348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.876 [2024-04-24 20:25:20.039383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.876 [2024-04-24 20:25:20.039395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:49.876 [2024-04-24 20:25:20.039406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.876 [2024-04-24 20:25:20.039417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.876 [2024-04-24 20:25:20.039577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.876 [2024-04-24 20:25:20.039594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:49.876 [2024-04-24 20:25:20.039607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.876 [2024-04-24 20:25:20.039635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.876 [2024-04-24 20:25:20.039684] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.876 [2024-04-24 20:25:20.039697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:49.876 [2024-04-24 20:25:20.039708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.876 [2024-04-24 20:25:20.039726] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.876 [2024-04-24 20:25:20.039782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.876 [2024-04-24 20:25:20.039797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:49.876 [2024-04-24 
20:25:20.039808] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.876 [2024-04-24 20:25:20.039819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.876 [2024-04-24 20:25:20.039889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:49.876 [2024-04-24 20:25:20.039924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:49.876 [2024-04-24 20:25:20.039936] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:49.876 [2024-04-24 20:25:20.039946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:49.876 [2024-04-24 20:25:20.040131] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 546.188 ms, result 0 00:18:51.255 00:18:51.255 00:18:51.255 20:25:21 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:51.255 20:25:21 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:51.824 20:25:21 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:51.824 [2024-04-24 20:25:21.893784] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:18:51.824 [2024-04-24 20:25:21.893964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78931 ] 00:18:52.083 [2024-04-24 20:25:22.081170] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.342 [2024-04-24 20:25:22.323715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.601 [2024-04-24 20:25:22.730769] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:52.601 [2024-04-24 20:25:22.730849] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:52.861 [2024-04-24 20:25:22.885513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.861 [2024-04-24 20:25:22.885585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:52.861 [2024-04-24 20:25:22.885603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:52.861 [2024-04-24 20:25:22.885617] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.861 [2024-04-24 20:25:22.888922] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.861 [2024-04-24 20:25:22.888970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:52.862 [2024-04-24 20:25:22.888985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:18:52.862 [2024-04-24 20:25:22.888995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.862 [2024-04-24 20:25:22.889115] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:52.862 [2024-04-24 20:25:22.890371] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:52.862 [2024-04-24 20:25:22.890406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.862 [2024-04-24 20:25:22.890418] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
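The trim test's byte counts are mutually consistent: spdk_dd replays the random pattern with --count=1024, and if each unit is one 4 KiB FTL block (an assumption, but it matches the rest of this run) that is exactly the 4194304-byte span cmp verified against /dev/zero. A one-line check:

echo $((1024 * 4096))   # 4194304, the value passed to cmp --bytes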
[FTL][ftl0] name: Open cache bdev 00:18:52.862 [2024-04-24 20:25:22.890434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:18:52.862 [2024-04-24 20:25:22.890445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.862 [2024-04-24 20:25:22.892003] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:52.862 [2024-04-24 20:25:22.913127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.862 [2024-04-24 20:25:22.913196] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:52.862 [2024-04-24 20:25:22.913212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.156 ms 00:18:52.862 [2024-04-24 20:25:22.913223] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.862 [2024-04-24 20:25:22.913380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.862 [2024-04-24 20:25:22.913395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:52.862 [2024-04-24 20:25:22.913406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:52.862 [2024-04-24 20:25:22.913420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.862 [2024-04-24 20:25:22.920899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.862 [2024-04-24 20:25:22.920934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:52.862 [2024-04-24 20:25:22.920947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.445 ms 00:18:52.862 [2024-04-24 20:25:22.920957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.862 [2024-04-24 20:25:22.921082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.862 [2024-04-24 20:25:22.921098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:52.862 [2024-04-24 20:25:22.921113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:18:52.862 [2024-04-24 20:25:22.921123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.862 [2024-04-24 20:25:22.921155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.862 [2024-04-24 20:25:22.921165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:52.862 [2024-04-24 20:25:22.921175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:52.862 [2024-04-24 20:25:22.921185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.862 [2024-04-24 20:25:22.921211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:52.862 [2024-04-24 20:25:22.927319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.862 [2024-04-24 20:25:22.927355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:52.862 [2024-04-24 20:25:22.927368] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:18:52.862 [2024-04-24 20:25:22.927378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.862 [2024-04-24 20:25:22.927455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.862 [2024-04-24 20:25:22.927472] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:52.862 [2024-04-24 20:25:22.927483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.010 ms 00:18:52.862 [2024-04-24 20:25:22.927493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.862 [2024-04-24 20:25:22.927519] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:52.862 [2024-04-24 20:25:22.927541] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:52.862 [2024-04-24 20:25:22.927577] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:52.862 [2024-04-24 20:25:22.927595] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:52.862 [2024-04-24 20:25:22.927668] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:52.862 [2024-04-24 20:25:22.927682] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:52.862 [2024-04-24 20:25:22.927695] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:52.862 [2024-04-24 20:25:22.927709] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:52.862 [2024-04-24 20:25:22.927722] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:52.862 [2024-04-24 20:25:22.927733] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:52.862 [2024-04-24 20:25:22.927743] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:52.862 [2024-04-24 20:25:22.927753] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:52.862 [2024-04-24 20:25:22.927763] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:52.862 [2024-04-24 20:25:22.927774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.862 [2024-04-24 20:25:22.927785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:52.862 [2024-04-24 20:25:22.927799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:18:52.862 [2024-04-24 20:25:22.927814] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.862 [2024-04-24 20:25:22.927903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.862 [2024-04-24 20:25:22.927916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:52.862 [2024-04-24 20:25:22.927926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:52.862 [2024-04-24 20:25:22.927936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.862 [2024-04-24 20:25:22.928007] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:52.862 [2024-04-24 20:25:22.928019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:52.862 [2024-04-24 20:25:22.928029] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:52.862 [2024-04-24 20:25:22.928061] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.862 [2024-04-24 20:25:22.928071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:52.862 [2024-04-24 20:25:22.928080] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:52.862 [2024-04-24 
20:25:22.928090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:52.862 [2024-04-24 20:25:22.928100] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:52.862 [2024-04-24 20:25:22.928110] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:52.862 [2024-04-24 20:25:22.928119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:52.862 [2024-04-24 20:25:22.928139] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:52.862 [2024-04-24 20:25:22.928149] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:52.862 [2024-04-24 20:25:22.928158] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:52.862 [2024-04-24 20:25:22.928167] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:52.862 [2024-04-24 20:25:22.928177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:52.862 [2024-04-24 20:25:22.928186] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.862 [2024-04-24 20:25:22.928195] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:52.862 [2024-04-24 20:25:22.928205] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:52.862 [2024-04-24 20:25:22.928214] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.862 [2024-04-24 20:25:22.928223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:52.862 [2024-04-24 20:25:22.928232] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:52.862 [2024-04-24 20:25:22.928241] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:52.862 [2024-04-24 20:25:22.928250] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:52.862 [2024-04-24 20:25:22.928259] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:52.862 [2024-04-24 20:25:22.928269] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:52.862 [2024-04-24 20:25:22.928278] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:52.862 [2024-04-24 20:25:22.928287] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:52.862 [2024-04-24 20:25:22.928296] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:52.862 [2024-04-24 20:25:22.928304] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:52.862 [2024-04-24 20:25:22.928314] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:52.862 [2024-04-24 20:25:22.928323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:52.862 [2024-04-24 20:25:22.928331] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:52.862 [2024-04-24 20:25:22.928340] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:52.862 [2024-04-24 20:25:22.928348] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:52.862 [2024-04-24 20:25:22.928357] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:52.862 [2024-04-24 20:25:22.928366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:52.862 [2024-04-24 20:25:22.928375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:52.862 [2024-04-24 20:25:22.928384] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:18:52.862 [2024-04-24 20:25:22.928393] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:52.862 [2024-04-24 20:25:22.928401] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:52.862 [2024-04-24 20:25:22.928410] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:52.862 [2024-04-24 20:25:22.928419] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:52.862 [2024-04-24 20:25:22.928429] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:52.862 [2024-04-24 20:25:22.928440] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:52.862 [2024-04-24 20:25:22.928450] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:52.862 [2024-04-24 20:25:22.928459] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:52.862 [2024-04-24 20:25:22.928469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:52.862 [2024-04-24 20:25:22.928477] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:52.862 [2024-04-24 20:25:22.928486] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:52.862 [2024-04-24 20:25:22.928495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:52.862 [2024-04-24 20:25:22.928505] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:52.862 [2024-04-24 20:25:22.928517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:52.862 [2024-04-24 20:25:22.928529] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:52.862 [2024-04-24 20:25:22.928539] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:52.862 [2024-04-24 20:25:22.928549] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:52.862 [2024-04-24 20:25:22.928559] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:52.862 [2024-04-24 20:25:22.928569] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:52.862 [2024-04-24 20:25:22.928580] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:52.862 [2024-04-24 20:25:22.928589] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:52.862 [2024-04-24 20:25:22.928599] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:52.862 [2024-04-24 20:25:22.928609] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:52.862 [2024-04-24 20:25:22.928619] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:52.862 [2024-04-24 20:25:22.928629] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:52.862 [2024-04-24 20:25:22.928639] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:52.862 [2024-04-24 20:25:22.928650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:52.862 [2024-04-24 20:25:22.928659] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:52.862 [2024-04-24 20:25:22.928670] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:52.862 [2024-04-24 20:25:22.928682] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:52.862 [2024-04-24 20:25:22.928692] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:52.862 [2024-04-24 20:25:22.928702] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:52.863 [2024-04-24 20:25:22.928712] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:52.863 [2024-04-24 20:25:22.928722] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.863 [2024-04-24 20:25:22.928731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:52.863 [2024-04-24 20:25:22.928745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:18:52.863 [2024-04-24 20:25:22.928755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.863 [2024-04-24 20:25:22.954272] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.863 [2024-04-24 20:25:22.954339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:52.863 [2024-04-24 20:25:22.954355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.508 ms 00:18:52.863 [2024-04-24 20:25:22.954366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.863 [2024-04-24 20:25:22.954526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.863 [2024-04-24 20:25:22.954539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:52.863 [2024-04-24 20:25:22.954550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:52.863 [2024-04-24 20:25:22.954560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.863 [2024-04-24 20:25:23.022219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.863 [2024-04-24 20:25:23.022265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:52.863 [2024-04-24 20:25:23.022293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.742 ms 00:18:52.863 [2024-04-24 20:25:23.022304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.863 [2024-04-24 20:25:23.022424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.863 [2024-04-24 20:25:23.022437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:52.863 [2024-04-24 20:25:23.022449] mngt/ftl_mngt.c: 409:trace_step: 
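The L2P numbers in this second setup dump tie back to the layout regions above: 23592960 entries at the stated 4-byte address size is exactly the 90.00 MiB reserved for the l2p region. A quick check:

entries=23592960; addr=4   # "L2P entries" and "L2P address size" from the dump above
awk -v n=$entries -v a=$addr 'BEGIN { printf "%.2f MiB\n", n*a/1048576 }'   # 90.00

The same entry count also implies roughly 92160 MiB of user-addressable space (entries times a 4 KiB block) against the 103424.00 MiB base device, the remainder presumably going to metadata regions and spare bands.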
*NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:52.863 [2024-04-24 20:25:23.022460] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.863 [2024-04-24 20:25:23.022951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.863 [2024-04-24 20:25:23.022966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:52.863 [2024-04-24 20:25:23.022978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:18:52.863 [2024-04-24 20:25:23.022988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.863 [2024-04-24 20:25:23.023109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.863 [2024-04-24 20:25:23.023123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:52.863 [2024-04-24 20:25:23.023134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:18:52.863 [2024-04-24 20:25:23.023145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.863 [2024-04-24 20:25:23.047081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.863 [2024-04-24 20:25:23.047130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:52.863 [2024-04-24 20:25:23.047147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.951 ms 00:18:52.863 [2024-04-24 20:25:23.047158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:52.863 [2024-04-24 20:25:23.068070] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:52.863 [2024-04-24 20:25:23.068121] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:52.863 [2024-04-24 20:25:23.068154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:52.863 [2024-04-24 20:25:23.068165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:52.863 [2024-04-24 20:25:23.068195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.870 ms 00:18:52.863 [2024-04-24 20:25:23.068206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.101006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.101068] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:53.122 [2024-04-24 20:25:23.101085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.745 ms 00:18:53.122 [2024-04-24 20:25:23.101103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.122550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.122601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:53.122 [2024-04-24 20:25:23.122616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.334 ms 00:18:53.122 [2024-04-24 20:25:23.122627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.141264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.141327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:53.122 [2024-04-24 20:25:23.141343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.559 ms 00:18:53.122 [2024-04-24 
20:25:23.141353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.141853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.141885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:53.122 [2024-04-24 20:25:23.141897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:18:53.122 [2024-04-24 20:25:23.141908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.235569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.235638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:53.122 [2024-04-24 20:25:23.235656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.785 ms 00:18:53.122 [2024-04-24 20:25:23.235667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.249641] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:53.122 [2024-04-24 20:25:23.266537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.266602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:53.122 [2024-04-24 20:25:23.266620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.771 ms 00:18:53.122 [2024-04-24 20:25:23.266630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.266745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.266766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:53.122 [2024-04-24 20:25:23.266778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:53.122 [2024-04-24 20:25:23.266789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.266875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.266912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:53.122 [2024-04-24 20:25:23.266929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:53.122 [2024-04-24 20:25:23.266955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.269370] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.269433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:53.122 [2024-04-24 20:25:23.269456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.390 ms 00:18:53.122 [2024-04-24 20:25:23.269474] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.269542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.269563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:53.122 [2024-04-24 20:25:23.269582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:53.122 [2024-04-24 20:25:23.269608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.269663] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:53.122 [2024-04-24 20:25:23.269683] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.269700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:53.122 [2024-04-24 20:25:23.269718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:53.122 [2024-04-24 20:25:23.269735] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.311066] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.311133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:53.122 [2024-04-24 20:25:23.311160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.351 ms 00:18:53.122 [2024-04-24 20:25:23.311171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.311331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.122 [2024-04-24 20:25:23.311345] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:53.122 [2024-04-24 20:25:23.311358] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:53.122 [2024-04-24 20:25:23.311368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.122 [2024-04-24 20:25:23.312361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:53.122 [2024-04-24 20:25:23.318549] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 427.246 ms, result 0 00:18:53.122 [2024-04-24 20:25:23.319288] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:53.122 [2024-04-24 20:25:23.338554] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:53.382  Copying: 4096/4096 [kB] (average 27 MBps)[2024-04-24 20:25:23.489379] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:53.382 [2024-04-24 20:25:23.504193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.382 [2024-04-24 20:25:23.504245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:53.382 [2024-04-24 20:25:23.504262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:53.382 [2024-04-24 20:25:23.504272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.382 [2024-04-24 20:25:23.504297] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:53.382 [2024-04-24 20:25:23.507931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.382 [2024-04-24 20:25:23.507958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:53.382 [2024-04-24 20:25:23.507988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.623 ms 00:18:53.382 [2024-04-24 20:25:23.507998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.382 [2024-04-24 20:25:23.509776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.382 [2024-04-24 20:25:23.509816] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:53.382 [2024-04-24 20:25:23.509830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:18:53.382 [2024-04-24 
20:25:23.509841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.382 [2024-04-24 20:25:23.513305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.382 [2024-04-24 20:25:23.513348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:53.382 [2024-04-24 20:25:23.513360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.426 ms 00:18:53.382 [2024-04-24 20:25:23.513377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.382 [2024-04-24 20:25:23.519340] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.382 [2024-04-24 20:25:23.519377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:53.382 [2024-04-24 20:25:23.519389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.938 ms 00:18:53.382 [2024-04-24 20:25:23.519399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.382 [2024-04-24 20:25:23.558490] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.382 [2024-04-24 20:25:23.558554] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:53.382 [2024-04-24 20:25:23.558571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.085 ms 00:18:53.382 [2024-04-24 20:25:23.558581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.382 [2024-04-24 20:25:23.581820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.382 [2024-04-24 20:25:23.581898] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:53.382 [2024-04-24 20:25:23.581916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.185 ms 00:18:53.382 [2024-04-24 20:25:23.581927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.382 [2024-04-24 20:25:23.582121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.382 [2024-04-24 20:25:23.582148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:53.382 [2024-04-24 20:25:23.582160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:53.382 [2024-04-24 20:25:23.582171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.642 [2024-04-24 20:25:23.624363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.642 [2024-04-24 20:25:23.624646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:53.642 [2024-04-24 20:25:23.624786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.236 ms 00:18:53.642 [2024-04-24 20:25:23.624845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.642 [2024-04-24 20:25:23.662933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.642 [2024-04-24 20:25:23.663164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:53.642 [2024-04-24 20:25:23.663244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.992 ms 00:18:53.642 [2024-04-24 20:25:23.663280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.642 [2024-04-24 20:25:23.704657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.642 [2024-04-24 20:25:23.704907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:53.642 [2024-04-24 20:25:23.705012] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 41.293 ms 00:18:53.642 [2024-04-24 20:25:23.705049] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.642 [2024-04-24 20:25:23.744710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.642 [2024-04-24 20:25:23.744778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:53.642 [2024-04-24 20:25:23.744796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.572 ms 00:18:53.642 [2024-04-24 20:25:23.744806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.642 [2024-04-24 20:25:23.744914] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:53.642 [2024-04-24 20:25:23.744935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.744948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.744960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.744971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.744983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.744994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 
0 state: free 00:18:53.642 [2024-04-24 20:25:23.745154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:53.642 [2024-04-24 20:25:23.745352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
45: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745664] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745952] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.745994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.746004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:53.643 [2024-04-24 20:25:23.746022] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:53.643 [2024-04-24 20:25:23.746036] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 59cba280-e672-4cc5-80b9-6f15d46f3b14 00:18:53.643 [2024-04-24 20:25:23.746047] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:53.643 [2024-04-24 20:25:23.746057] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:53.643 [2024-04-24 20:25:23.746066] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:53.643 [2024-04-24 20:25:23.746077] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:53.643 [2024-04-24 20:25:23.746086] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:53.643 [2024-04-24 20:25:23.746097] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:53.643 [2024-04-24 20:25:23.746106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:53.643 [2024-04-24 20:25:23.746116] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:53.643 [2024-04-24 20:25:23.746125] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:53.643 [2024-04-24 20:25:23.746135] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.643 [2024-04-24 20:25:23.746145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:53.643 [2024-04-24 20:25:23.746155] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.225 ms 00:18:53.643 [2024-04-24 20:25:23.746165] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.643 [2024-04-24 20:25:23.766892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.643 [2024-04-24 20:25:23.766960] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:53.643 [2024-04-24 20:25:23.766976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.727 ms 00:18:53.643 [2024-04-24 20:25:23.766986] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.643 [2024-04-24 20:25:23.767294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.643 [2024-04-24 20:25:23.767307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:53.643 [2024-04-24 20:25:23.767326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:18:53.643 [2024-04-24 20:25:23.767336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.643 [2024-04-24 20:25:23.826663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.643 [2024-04-24 
20:25:23.826736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:53.643 [2024-04-24 20:25:23.826761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.643 [2024-04-24 20:25:23.826772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.643 [2024-04-24 20:25:23.826901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.643 [2024-04-24 20:25:23.826914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:53.643 [2024-04-24 20:25:23.826930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.643 [2024-04-24 20:25:23.826940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.643 [2024-04-24 20:25:23.826999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.643 [2024-04-24 20:25:23.827012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:53.643 [2024-04-24 20:25:23.827022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.643 [2024-04-24 20:25:23.827032] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.643 [2024-04-24 20:25:23.827051] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.643 [2024-04-24 20:25:23.827061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:53.643 [2024-04-24 20:25:23.827071] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.644 [2024-04-24 20:25:23.827081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.901 [2024-04-24 20:25:23.944747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.901 [2024-04-24 20:25:23.944817] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:53.901 [2024-04-24 20:25:23.944833] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.901 [2024-04-24 20:25:23.944844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.901 [2024-04-24 20:25:23.996104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.901 [2024-04-24 20:25:23.996177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:53.901 [2024-04-24 20:25:23.996195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.901 [2024-04-24 20:25:23.996217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.901 [2024-04-24 20:25:23.996297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.901 [2024-04-24 20:25:23.996310] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:53.901 [2024-04-24 20:25:23.996321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.901 [2024-04-24 20:25:23.996332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.901 [2024-04-24 20:25:23.996364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.901 [2024-04-24 20:25:23.996374] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:53.901 [2024-04-24 20:25:23.996385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.901 [2024-04-24 20:25:23.996395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.901 [2024-04-24 20:25:23.996507] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.901 [2024-04-24 20:25:23.996520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:53.901 [2024-04-24 20:25:23.996531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.901 [2024-04-24 20:25:23.996541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.901 [2024-04-24 20:25:23.996588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.901 [2024-04-24 20:25:23.996601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:53.901 [2024-04-24 20:25:23.996611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.901 [2024-04-24 20:25:23.996627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.901 [2024-04-24 20:25:23.996671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.901 [2024-04-24 20:25:23.996683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:53.901 [2024-04-24 20:25:23.996694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.901 [2024-04-24 20:25:23.996704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.901 [2024-04-24 20:25:23.996752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:53.901 [2024-04-24 20:25:23.996764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:53.901 [2024-04-24 20:25:23.996775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:53.901 [2024-04-24 20:25:23.996785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.901 [2024-04-24 20:25:23.996991] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 493.570 ms, result 0 00:18:55.277 00:18:55.277 00:18:55.277 20:25:25 -- ftl/trim.sh@93 -- # svcpid=78967 00:18:55.277 20:25:25 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:55.277 20:25:25 -- ftl/trim.sh@94 -- # waitforlisten 78967 00:18:55.277 20:25:25 -- common/autotest_common.sh@817 -- # '[' -z 78967 ']' 00:18:55.277 20:25:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.277 20:25:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:55.277 20:25:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.277 20:25:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:55.277 20:25:25 -- common/autotest_common.sh@10 -- # set +x 00:18:55.277 [2024-04-24 20:25:25.423612] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:18:55.277 [2024-04-24 20:25:25.423729] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78967 ] 00:18:55.535 [2024-04-24 20:25:25.596873] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.794 [2024-04-24 20:25:25.833402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.750 20:25:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:56.750 20:25:26 -- common/autotest_common.sh@850 -- # return 0 00:18:56.750 20:25:26 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:56.750 [2024-04-24 20:25:26.963773] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:56.750 [2024-04-24 20:25:26.963848] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:57.010 [2024-04-24 20:25:27.117153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.010 [2024-04-24 20:25:27.117217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:57.010 [2024-04-24 20:25:27.117241] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:57.010 [2024-04-24 20:25:27.117254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.010 [2024-04-24 20:25:27.120548] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.010 [2024-04-24 20:25:27.120597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:57.010 [2024-04-24 20:25:27.120616] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.273 ms 00:18:57.010 [2024-04-24 20:25:27.120631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.010 [2024-04-24 20:25:27.120757] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:57.010 [2024-04-24 20:25:27.121697] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:57.010 [2024-04-24 20:25:27.121738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.011 [2024-04-24 20:25:27.121755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:57.011 [2024-04-24 20:25:27.121771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms 00:18:57.011 [2024-04-24 20:25:27.121783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.011 [2024-04-24 20:25:27.123457] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:57.011 [2024-04-24 20:25:27.142151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.011 [2024-04-24 20:25:27.142200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:57.011 [2024-04-24 20:25:27.142216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.728 ms 00:18:57.011 [2024-04-24 20:25:27.142229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.011 [2024-04-24 20:25:27.142333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.011 [2024-04-24 20:25:27.142352] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:57.011 [2024-04-24 20:25:27.142363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.023 ms 00:18:57.011 [2024-04-24 20:25:27.142376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.011 [2024-04-24 20:25:27.149357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.011 [2024-04-24 20:25:27.149392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:57.011 [2024-04-24 20:25:27.149405] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.942 ms 00:18:57.011 [2024-04-24 20:25:27.149420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.011 [2024-04-24 20:25:27.149521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.011 [2024-04-24 20:25:27.149537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:57.011 [2024-04-24 20:25:27.149548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:18:57.011 [2024-04-24 20:25:27.149560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.011 [2024-04-24 20:25:27.149587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.011 [2024-04-24 20:25:27.149601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:57.011 [2024-04-24 20:25:27.149611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:57.011 [2024-04-24 20:25:27.149623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.011 [2024-04-24 20:25:27.149651] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:57.011 [2024-04-24 20:25:27.155366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.011 [2024-04-24 20:25:27.155454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:57.011 [2024-04-24 20:25:27.155474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.728 ms 00:18:57.011 [2024-04-24 20:25:27.155488] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.011 [2024-04-24 20:25:27.155571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.011 [2024-04-24 20:25:27.155587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:57.011 [2024-04-24 20:25:27.155606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:57.011 [2024-04-24 20:25:27.155620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.011 [2024-04-24 20:25:27.155650] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:57.011 [2024-04-24 20:25:27.155678] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:57.011 [2024-04-24 20:25:27.155722] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:57.011 [2024-04-24 20:25:27.155762] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:57.011 [2024-04-24 20:25:27.155886] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:57.011 [2024-04-24 20:25:27.155914] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:57.011 [2024-04-24 20:25:27.155950] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout 
blob store 0x140 bytes 00:18:57.011 [2024-04-24 20:25:27.155973] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:57.011 [2024-04-24 20:25:27.155998] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:57.011 [2024-04-24 20:25:27.156017] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:57.011 [2024-04-24 20:25:27.156032] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:57.011 [2024-04-24 20:25:27.156042] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:57.011 [2024-04-24 20:25:27.156055] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:57.011 [2024-04-24 20:25:27.156067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.011 [2024-04-24 20:25:27.156086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:57.011 [2024-04-24 20:25:27.156097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:18:57.011 [2024-04-24 20:25:27.156110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.011 [2024-04-24 20:25:27.156178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.011 [2024-04-24 20:25:27.156192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:57.011 [2024-04-24 20:25:27.156203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:57.011 [2024-04-24 20:25:27.156215] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.011 [2024-04-24 20:25:27.156288] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:57.011 [2024-04-24 20:25:27.156303] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:57.011 [2024-04-24 20:25:27.156316] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:57.011 [2024-04-24 20:25:27.156331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.011 [2024-04-24 20:25:27.156342] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:57.011 [2024-04-24 20:25:27.156372] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:57.011 [2024-04-24 20:25:27.156383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:57.011 [2024-04-24 20:25:27.156395] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:57.011 [2024-04-24 20:25:27.156405] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:57.011 [2024-04-24 20:25:27.156420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:57.011 [2024-04-24 20:25:27.156429] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:57.011 [2024-04-24 20:25:27.156442] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:57.011 [2024-04-24 20:25:27.156451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:57.011 [2024-04-24 20:25:27.156464] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:57.011 [2024-04-24 20:25:27.156473] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:18:57.011 [2024-04-24 20:25:27.156486] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.011 [2024-04-24 20:25:27.156496] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:18:57.011 [2024-04-24 20:25:27.156517] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:18:57.011 [2024-04-24 20:25:27.156526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.011 [2024-04-24 20:25:27.156550] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:57.011 [2024-04-24 20:25:27.156560] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:18:57.011 [2024-04-24 20:25:27.156572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:57.011 [2024-04-24 20:25:27.156582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:57.011 [2024-04-24 20:25:27.156594] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:57.011 [2024-04-24 20:25:27.156603] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:57.011 [2024-04-24 20:25:27.156619] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:57.011 [2024-04-24 20:25:27.156628] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:18:57.011 [2024-04-24 20:25:27.156642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:57.011 [2024-04-24 20:25:27.156651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:57.011 [2024-04-24 20:25:27.156663] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:57.011 [2024-04-24 20:25:27.156672] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:57.011 [2024-04-24 20:25:27.156684] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:57.011 [2024-04-24 20:25:27.156694] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:18:57.011 [2024-04-24 20:25:27.156705] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:57.011 [2024-04-24 20:25:27.156715] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:57.011 [2024-04-24 20:25:27.156727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:57.011 [2024-04-24 20:25:27.156736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:57.011 [2024-04-24 20:25:27.156748] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:57.011 [2024-04-24 20:25:27.156757] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:18:57.011 [2024-04-24 20:25:27.156769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:57.011 [2024-04-24 20:25:27.156778] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:57.011 [2024-04-24 20:25:27.156793] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:57.011 [2024-04-24 20:25:27.156803] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:57.011 [2024-04-24 20:25:27.156815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:57.011 [2024-04-24 20:25:27.156826] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:57.011 [2024-04-24 20:25:27.156838] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:57.011 [2024-04-24 20:25:27.156847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:57.011 [2024-04-24 20:25:27.156872] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:57.011 [2024-04-24 20:25:27.156883] 
ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:57.011 [2024-04-24 20:25:27.156895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:57.012 [2024-04-24 20:25:27.156906] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:57.012 [2024-04-24 20:25:27.156923] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:57.012 [2024-04-24 20:25:27.156935] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:57.012 [2024-04-24 20:25:27.156949] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:18:57.012 [2024-04-24 20:25:27.156960] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:18:57.012 [2024-04-24 20:25:27.156974] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:18:57.012 [2024-04-24 20:25:27.156985] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:18:57.012 [2024-04-24 20:25:27.157008] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:18:57.012 [2024-04-24 20:25:27.157027] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:18:57.012 [2024-04-24 20:25:27.157054] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:18:57.012 [2024-04-24 20:25:27.157065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:18:57.012 [2024-04-24 20:25:27.157078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:18:57.012 [2024-04-24 20:25:27.157088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:18:57.012 [2024-04-24 20:25:27.157100] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:18:57.012 [2024-04-24 20:25:27.157111] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:18:57.012 [2024-04-24 20:25:27.157123] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:57.012 [2024-04-24 20:25:27.157134] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:57.012 [2024-04-24 20:25:27.157150] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:57.012 [2024-04-24 20:25:27.157160] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:57.012 [2024-04-24 20:25:27.157173] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:57.012 [2024-04-24 20:25:27.157183] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:57.012 [2024-04-24 20:25:27.157195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.012 [2024-04-24 20:25:27.157215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:57.012 [2024-04-24 20:25:27.157238] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:18:57.012 [2024-04-24 20:25:27.157254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.012 [2024-04-24 20:25:27.181937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.012 [2024-04-24 20:25:27.181979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.012 [2024-04-24 20:25:27.181997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.648 ms 00:18:57.012 [2024-04-24 20:25:27.182016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.012 [2024-04-24 20:25:27.182157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.012 [2024-04-24 20:25:27.182169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:57.012 [2024-04-24 20:25:27.182182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:18:57.012 [2024-04-24 20:25:27.182192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.012 [2024-04-24 20:25:27.236177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.012 [2024-04-24 20:25:27.236224] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.012 [2024-04-24 20:25:27.236244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.047 ms 00:18:57.012 [2024-04-24 20:25:27.236254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.012 [2024-04-24 20:25:27.236353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.012 [2024-04-24 20:25:27.236365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:57.012 [2024-04-24 20:25:27.236381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:57.012 [2024-04-24 20:25:27.236391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.012 [2024-04-24 20:25:27.236824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.012 [2024-04-24 20:25:27.236837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.012 [2024-04-24 20:25:27.236850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:18:57.012 [2024-04-24 20:25:27.236879] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.012 [2024-04-24 20:25:27.236995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.012 [2024-04-24 20:25:27.237008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.012 [2024-04-24 20:25:27.237021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:18:57.012 [2024-04-24 20:25:27.237031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.272 [2024-04-24 20:25:27.261562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.272 [2024-04-24 
20:25:27.261605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:57.272 [2024-04-24 20:25:27.261622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.545 ms 00:18:57.272 [2024-04-24 20:25:27.261633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.272 [2024-04-24 20:25:27.281122] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:57.272 [2024-04-24 20:25:27.281163] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:57.272 [2024-04-24 20:25:27.281183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.272 [2024-04-24 20:25:27.281194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:57.272 [2024-04-24 20:25:27.281208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.435 ms 00:18:57.272 [2024-04-24 20:25:27.281218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.272 [2024-04-24 20:25:27.313641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.272 [2024-04-24 20:25:27.313713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:57.272 [2024-04-24 20:25:27.313739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.381 ms 00:18:57.272 [2024-04-24 20:25:27.313753] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.272 [2024-04-24 20:25:27.333578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.272 [2024-04-24 20:25:27.333625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:57.272 [2024-04-24 20:25:27.333643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.694 ms 00:18:57.272 [2024-04-24 20:25:27.333654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.272 [2024-04-24 20:25:27.353710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.272 [2024-04-24 20:25:27.353754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:57.272 [2024-04-24 20:25:27.353771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.994 ms 00:18:57.272 [2024-04-24 20:25:27.353797] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.272 [2024-04-24 20:25:27.354351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.272 [2024-04-24 20:25:27.354381] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:57.272 [2024-04-24 20:25:27.354397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:18:57.272 [2024-04-24 20:25:27.354408] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.272 [2024-04-24 20:25:27.448925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.272 [2024-04-24 20:25:27.448988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:57.272 [2024-04-24 20:25:27.449008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.636 ms 00:18:57.272 [2024-04-24 20:25:27.449019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.272 [2024-04-24 20:25:27.461731] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:57.272 [2024-04-24 20:25:27.478454] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.272 [2024-04-24 20:25:27.478513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:57.272 [2024-04-24 20:25:27.478530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.349 ms 00:18:57.272 [2024-04-24 20:25:27.478547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.272 [2024-04-24 20:25:27.478661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.272 [2024-04-24 20:25:27.478676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:57.272 [2024-04-24 20:25:27.478687] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:57.272 [2024-04-24 20:25:27.478706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.272 [2024-04-24 20:25:27.478755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.272 [2024-04-24 20:25:27.478776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:57.272 [2024-04-24 20:25:27.478787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:57.272 [2024-04-24 20:25:27.478799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.272 [2024-04-24 20:25:27.480880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.272 [2024-04-24 20:25:27.480928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:57.273 [2024-04-24 20:25:27.480940] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.063 ms 00:18:57.273 [2024-04-24 20:25:27.480953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.273 [2024-04-24 20:25:27.480985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.273 [2024-04-24 20:25:27.480999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:57.273 [2024-04-24 20:25:27.481010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:57.273 [2024-04-24 20:25:27.481022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.273 [2024-04-24 20:25:27.481060] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:57.273 [2024-04-24 20:25:27.481076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.273 [2024-04-24 20:25:27.481102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:57.273 [2024-04-24 20:25:27.481116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:57.273 [2024-04-24 20:25:27.481125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.532 [2024-04-24 20:25:27.520200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.532 [2024-04-24 20:25:27.520258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:57.532 [2024-04-24 20:25:27.520278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.109 ms 00:18:57.532 [2024-04-24 20:25:27.520290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.532 [2024-04-24 20:25:27.520434] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.532 [2024-04-24 20:25:27.520447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:57.532 [2024-04-24 20:25:27.520461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.034 ms 00:18:57.532 [2024-04-24 20:25:27.520473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.532 [2024-04-24 20:25:27.521418] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:57.532 [2024-04-24 20:25:27.527179] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 404.630 ms, result 0 00:18:57.532 [2024-04-24 20:25:27.528687] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:57.532 Some configs were skipped because the RPC state that can call them passed over. 00:18:57.532 20:25:27 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:57.792 [2024-04-24 20:25:27.793516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.792 [2024-04-24 20:25:27.793606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:57.792 [2024-04-24 20:25:27.793633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.729 ms 00:18:57.792 [2024-04-24 20:25:27.793659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.792 [2024-04-24 20:25:27.793730] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 42.939 ms, result 0 00:18:57.792 true 00:18:57.792 20:25:27 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:58.050 [2024-04-24 20:25:28.073606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.050 [2024-04-24 20:25:28.073668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:18:58.050 [2024-04-24 20:25:28.073686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.201 ms 00:18:58.050 [2024-04-24 20:25:28.073697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.050 [2024-04-24 20:25:28.073741] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 40.344 ms, result 0 00:18:58.050 true 00:18:58.050 20:25:28 -- ftl/trim.sh@102 -- # killprocess 78967 00:18:58.050 20:25:28 -- common/autotest_common.sh@936 -- # '[' -z 78967 ']' 00:18:58.050 20:25:28 -- common/autotest_common.sh@940 -- # kill -0 78967 00:18:58.050 20:25:28 -- common/autotest_common.sh@941 -- # uname 00:18:58.050 20:25:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:58.050 20:25:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78967 00:18:58.050 20:25:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:58.050 killing process with pid 78967 00:18:58.050 20:25:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:58.050 20:25:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78967' 00:18:58.050 20:25:28 -- common/autotest_common.sh@955 -- # kill 78967 00:18:58.050 20:25:28 -- common/autotest_common.sh@960 -- # wait 78967 00:18:59.432 [2024-04-24 20:25:29.235315] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-04-24 20:25:29.235382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:59.432 [2024-04-24 20:25:29.235400] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:59.432 [2024-04-24 
20:25:29.235413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-04-24 20:25:29.235438] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:59.432 [2024-04-24 20:25:29.238911] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-04-24 20:25:29.238945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:59.432 [2024-04-24 20:25:29.238961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.457 ms 00:18:59.432 [2024-04-24 20:25:29.238974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-04-24 20:25:29.239245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-04-24 20:25:29.239261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:59.432 [2024-04-24 20:25:29.239275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:18:59.432 [2024-04-24 20:25:29.239307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-04-24 20:25:29.242436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-04-24 20:25:29.242468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:59.432 [2024-04-24 20:25:29.242486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.111 ms 00:18:59.432 [2024-04-24 20:25:29.242496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-04-24 20:25:29.248200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-04-24 20:25:29.248233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:59.432 [2024-04-24 20:25:29.248250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.675 ms 00:18:59.432 [2024-04-24 20:25:29.248260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-04-24 20:25:29.264212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-04-24 20:25:29.264249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:59.432 [2024-04-24 20:25:29.264266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.916 ms 00:18:59.432 [2024-04-24 20:25:29.264276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-04-24 20:25:29.275165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-04-24 20:25:29.275207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:59.432 [2024-04-24 20:25:29.275224] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.840 ms 00:18:59.432 [2024-04-24 20:25:29.275235] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-04-24 20:25:29.275385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-04-24 20:25:29.275398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:59.432 [2024-04-24 20:25:29.275415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:18:59.432 [2024-04-24 20:25:29.275425] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-04-24 20:25:29.290870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-04-24 20:25:29.290906] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:59.432 [2024-04-24 20:25:29.290922] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.444 ms 00:18:59.432 [2024-04-24 20:25:29.290932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-04-24 20:25:29.306709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-04-24 20:25:29.306753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:59.432 [2024-04-24 20:25:29.306780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.756 ms 00:18:59.432 [2024-04-24 20:25:29.306790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-04-24 20:25:29.321884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-04-24 20:25:29.321927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:59.432 [2024-04-24 20:25:29.321944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.068 ms 00:18:59.432 [2024-04-24 20:25:29.321953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-04-24 20:25:29.338201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.432 [2024-04-24 20:25:29.338253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:59.432 [2024-04-24 20:25:29.338270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.197 ms 00:18:59.432 [2024-04-24 20:25:29.338297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.432 [2024-04-24 20:25:29.338354] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:59.432 [2024-04-24 20:25:29.338374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338546] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:59.432 [2024-04-24 20:25:29.338574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338893] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.338991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 
20:25:29.339234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:18:59.433 [2024-04-24 20:25:29.339557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:59.433 [2024-04-24 20:25:29.339746] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:59.433 [2024-04-24 20:25:29.339759] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 59cba280-e672-4cc5-80b9-6f15d46f3b14 00:18:59.433 [2024-04-24 20:25:29.339771] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:59.433 [2024-04-24 20:25:29.339786] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:59.434 [2024-04-24 20:25:29.339797] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:59.434 [2024-04-24 20:25:29.339811] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:59.434 [2024-04-24 20:25:29.339821] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:59.434 [2024-04-24 20:25:29.339839] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:59.434 [2024-04-24 20:25:29.339849] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:59.434 [2024-04-24 20:25:29.339861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:59.434 [2024-04-24 20:25:29.339880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:59.434 [2024-04-24 20:25:29.339894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.434 [2024-04-24 20:25:29.339905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:59.434 [2024-04-24 20:25:29.339919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.546 ms 00:18:59.434 [2024-04-24 20:25:29.339930] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.360600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.434 [2024-04-24 20:25:29.360651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:59.434 [2024-04-24 20:25:29.360669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.667 ms 00:18:59.434 [2024-04-24 20:25:29.360683] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.360999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:59.434 [2024-04-24 20:25:29.361012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:59.434 [2024-04-24 20:25:29.361025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:18:59.434 [2024-04-24 20:25:29.361036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.432157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.434 [2024-04-24 20:25:29.432213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:59.434 [2024-04-24 20:25:29.432236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.434 [2024-04-24 20:25:29.432246] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.432366] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.434 [2024-04-24 20:25:29.432378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:59.434 [2024-04-24 20:25:29.432391] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.434 [2024-04-24 20:25:29.432401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.432453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.434 [2024-04-24 20:25:29.432465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:59.434 [2024-04-24 20:25:29.432478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.434 [2024-04-24 20:25:29.432491] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.432516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.434 [2024-04-24 20:25:29.432526] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:59.434 [2024-04-24 20:25:29.432538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.434 [2024-04-24 20:25:29.432548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.586108] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.434 [2024-04-24 20:25:29.586194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:59.434 [2024-04-24 20:25:29.586240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.434 [2024-04-24 20:25:29.586266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.650641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.434 [2024-04-24 20:25:29.650715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:59.434 [2024-04-24 20:25:29.650735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.434 
[2024-04-24 20:25:29.650746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.650902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.434 [2024-04-24 20:25:29.650917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:59.434 [2024-04-24 20:25:29.650931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.434 [2024-04-24 20:25:29.650942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.650985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.434 [2024-04-24 20:25:29.650997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:59.434 [2024-04-24 20:25:29.651010] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.434 [2024-04-24 20:25:29.651020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.651140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.434 [2024-04-24 20:25:29.651154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:59.434 [2024-04-24 20:25:29.651168] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.434 [2024-04-24 20:25:29.651179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.651222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.434 [2024-04-24 20:25:29.651237] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:59.434 [2024-04-24 20:25:29.651250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.434 [2024-04-24 20:25:29.651260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.651304] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.434 [2024-04-24 20:25:29.651316] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:59.434 [2024-04-24 20:25:29.651329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.434 [2024-04-24 20:25:29.651339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.651395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:59.434 [2024-04-24 20:25:29.651407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:59.434 [2024-04-24 20:25:29.651421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:59.434 [2024-04-24 20:25:29.651431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:59.434 [2024-04-24 20:25:29.651573] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 416.909 ms, result 0 00:19:00.837 20:25:30 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:01.096 [2024-04-24 20:25:31.088176] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:19:01.096 [2024-04-24 20:25:31.088298] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79044 ] 00:19:01.096 [2024-04-24 20:25:31.261387] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.355 [2024-04-24 20:25:31.498326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.924 [2024-04-24 20:25:31.907995] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:01.924 [2024-04-24 20:25:31.908090] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:01.924 [2024-04-24 20:25:32.062344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.924 [2024-04-24 20:25:32.062406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:01.924 [2024-04-24 20:25:32.062422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:01.924 [2024-04-24 20:25:32.062435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.924 [2024-04-24 20:25:32.065801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.924 [2024-04-24 20:25:32.065984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:01.924 [2024-04-24 20:25:32.066114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.349 ms 00:19:01.924 [2024-04-24 20:25:32.066157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.924 [2024-04-24 20:25:32.066314] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:01.924 [2024-04-24 20:25:32.067971] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:01.924 [2024-04-24 20:25:32.068127] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.924 [2024-04-24 20:25:32.068200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:01.924 [2024-04-24 20:25:32.068242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.825 ms 00:19:01.924 [2024-04-24 20:25:32.068271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.924 [2024-04-24 20:25:32.069938] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:01.924 [2024-04-24 20:25:32.091059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.924 [2024-04-24 20:25:32.091101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:01.924 [2024-04-24 20:25:32.091118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.155 ms 00:19:01.924 [2024-04-24 20:25:32.091130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.924 [2024-04-24 20:25:32.091234] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.924 [2024-04-24 20:25:32.091249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:01.924 [2024-04-24 20:25:32.091261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:01.924 [2024-04-24 20:25:32.091275] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.924 [2024-04-24 20:25:32.099343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.924 [2024-04-24 
20:25:32.099385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.924 [2024-04-24 20:25:32.099401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.033 ms 00:19:01.925 [2024-04-24 20:25:32.099412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.925 [2024-04-24 20:25:32.099545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.925 [2024-04-24 20:25:32.099560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.925 [2024-04-24 20:25:32.099575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:19:01.925 [2024-04-24 20:25:32.099586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.925 [2024-04-24 20:25:32.099619] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.925 [2024-04-24 20:25:32.099632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:01.925 [2024-04-24 20:25:32.099642] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:01.925 [2024-04-24 20:25:32.099652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.925 [2024-04-24 20:25:32.099680] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:01.925 [2024-04-24 20:25:32.105388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.925 [2024-04-24 20:25:32.105422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.925 [2024-04-24 20:25:32.105435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.725 ms 00:19:01.925 [2024-04-24 20:25:32.105445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.925 [2024-04-24 20:25:32.105516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.925 [2024-04-24 20:25:32.105532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:01.925 [2024-04-24 20:25:32.105543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:01.925 [2024-04-24 20:25:32.105552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.925 [2024-04-24 20:25:32.105576] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:01.925 [2024-04-24 20:25:32.105598] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:01.925 [2024-04-24 20:25:32.105632] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:01.925 [2024-04-24 20:25:32.105651] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:01.925 [2024-04-24 20:25:32.105720] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:01.925 [2024-04-24 20:25:32.105733] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:01.925 [2024-04-24 20:25:32.105746] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:01.925 [2024-04-24 20:25:32.105759] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:01.925 [2024-04-24 20:25:32.105770] ftl_layout.c: 675:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:01.925 [2024-04-24 20:25:32.105781] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:01.925 [2024-04-24 20:25:32.105791] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:01.925 [2024-04-24 20:25:32.105800] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:01.925 [2024-04-24 20:25:32.105810] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:01.925 [2024-04-24 20:25:32.105821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.925 [2024-04-24 20:25:32.105831] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:01.925 [2024-04-24 20:25:32.105844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:19:01.925 [2024-04-24 20:25:32.105878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.925 [2024-04-24 20:25:32.105939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.925 [2024-04-24 20:25:32.105950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:01.925 [2024-04-24 20:25:32.105960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:01.925 [2024-04-24 20:25:32.105970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.925 [2024-04-24 20:25:32.106038] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:01.925 [2024-04-24 20:25:32.106050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:01.925 [2024-04-24 20:25:32.106060] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:01.925 [2024-04-24 20:25:32.106073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.925 [2024-04-24 20:25:32.106083] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:01.925 [2024-04-24 20:25:32.106093] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:01.925 [2024-04-24 20:25:32.106101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:01.925 [2024-04-24 20:25:32.106111] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:01.925 [2024-04-24 20:25:32.106121] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:01.925 [2024-04-24 20:25:32.106130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:01.925 [2024-04-24 20:25:32.106149] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:01.925 [2024-04-24 20:25:32.106160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:01.925 [2024-04-24 20:25:32.106169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:01.925 [2024-04-24 20:25:32.106178] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:01.925 [2024-04-24 20:25:32.106187] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:19:01.925 [2024-04-24 20:25:32.106196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.925 [2024-04-24 20:25:32.106205] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:01.925 [2024-04-24 20:25:32.106214] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:19:01.925 [2024-04-24 20:25:32.106223] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:19:01.925 [2024-04-24 20:25:32.106232] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:01.925 [2024-04-24 20:25:32.106241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:19:01.925 [2024-04-24 20:25:32.106250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:01.925 [2024-04-24 20:25:32.106264] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:01.925 [2024-04-24 20:25:32.106281] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:01.925 [2024-04-24 20:25:32.106294] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:01.925 [2024-04-24 20:25:32.106306] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:01.925 [2024-04-24 20:25:32.106318] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:19:01.925 [2024-04-24 20:25:32.106345] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:01.925 [2024-04-24 20:25:32.106358] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:01.925 [2024-04-24 20:25:32.106371] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:01.925 [2024-04-24 20:25:32.106383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:01.925 [2024-04-24 20:25:32.106395] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:01.925 [2024-04-24 20:25:32.106408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:19:01.925 [2024-04-24 20:25:32.106420] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:01.925 [2024-04-24 20:25:32.106432] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:01.925 [2024-04-24 20:25:32.106445] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:01.925 [2024-04-24 20:25:32.106458] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:01.925 [2024-04-24 20:25:32.106470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:01.925 [2024-04-24 20:25:32.106482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:19:01.925 [2024-04-24 20:25:32.106494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:01.925 [2024-04-24 20:25:32.106506] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:01.925 [2024-04-24 20:25:32.106519] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:01.925 [2024-04-24 20:25:32.106532] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:01.925 [2024-04-24 20:25:32.106546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:01.925 [2024-04-24 20:25:32.106559] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:01.925 [2024-04-24 20:25:32.106572] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:01.925 [2024-04-24 20:25:32.106584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:01.925 [2024-04-24 20:25:32.106597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:01.925 [2024-04-24 20:25:32.106609] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:01.925 [2024-04-24 20:25:32.106621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:01.925 [2024-04-24 20:25:32.106634] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:01.925 [2024-04-24 20:25:32.106650] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:01.925 [2024-04-24 20:25:32.106666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:01.925 [2024-04-24 20:25:32.106680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:19:01.925 [2024-04-24 20:25:32.106694] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:19:01.925 [2024-04-24 20:25:32.106708] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:19:01.925 [2024-04-24 20:25:32.106722] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:19:01.925 [2024-04-24 20:25:32.106735] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:19:01.925 [2024-04-24 20:25:32.106749] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:19:01.925 [2024-04-24 20:25:32.106773] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:19:01.925 [2024-04-24 20:25:32.106787] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:19:01.926 [2024-04-24 20:25:32.106798] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:19:01.926 [2024-04-24 20:25:32.106809] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:19:01.926 [2024-04-24 20:25:32.106821] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:19:01.926 [2024-04-24 20:25:32.106832] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:19:01.926 [2024-04-24 20:25:32.106842] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:01.926 [2024-04-24 20:25:32.106869] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:01.926 [2024-04-24 20:25:32.106882] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:01.926 [2024-04-24 20:25:32.106892] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:01.926 [2024-04-24 20:25:32.106903] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:01.926 [2024-04-24 20:25:32.106914] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:01.926 [2024-04-24 20:25:32.106925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.926 [2024-04-24 20:25:32.106935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:01.926 [2024-04-24 20:25:32.106951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:19:01.926 [2024-04-24 20:25:32.106961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.926 [2024-04-24 20:25:32.131929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.926 [2024-04-24 20:25:32.131970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.926 [2024-04-24 20:25:32.131984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.953 ms 00:19:01.926 [2024-04-24 20:25:32.131995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.926 [2024-04-24 20:25:32.132123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.926 [2024-04-24 20:25:32.132136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:01.926 [2024-04-24 20:25:32.132147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:01.926 [2024-04-24 20:25:32.132158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.186 [2024-04-24 20:25:32.194897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.186 [2024-04-24 20:25:32.194955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.186 [2024-04-24 20:25:32.194971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.814 ms 00:19:02.186 [2024-04-24 20:25:32.194982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.186 [2024-04-24 20:25:32.195095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.186 [2024-04-24 20:25:32.195108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.186 [2024-04-24 20:25:32.195119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:02.186 [2024-04-24 20:25:32.195129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.186 [2024-04-24 20:25:32.195593] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.186 [2024-04-24 20:25:32.195611] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.186 [2024-04-24 20:25:32.195622] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:19:02.186 [2024-04-24 20:25:32.195633] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.186 [2024-04-24 20:25:32.195755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.186 [2024-04-24 20:25:32.195769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.186 [2024-04-24 20:25:32.195780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:19:02.186 [2024-04-24 20:25:32.195790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.186 [2024-04-24 20:25:32.218910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.186 [2024-04-24 20:25:32.218959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.186 [2024-04-24 20:25:32.218976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.131 ms 00:19:02.186 
[2024-04-24 20:25:32.218987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.186 [2024-04-24 20:25:32.239018] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:02.186 [2024-04-24 20:25:32.239066] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:02.186 [2024-04-24 20:25:32.239086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.186 [2024-04-24 20:25:32.239097] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:02.186 [2024-04-24 20:25:32.239110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.970 ms 00:19:02.186 [2024-04-24 20:25:32.239121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.186 [2024-04-24 20:25:32.270895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.186 [2024-04-24 20:25:32.270952] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:02.186 [2024-04-24 20:25:32.270969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.721 ms 00:19:02.186 [2024-04-24 20:25:32.270987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.186 [2024-04-24 20:25:32.291971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-04-24 20:25:32.292019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:02.187 [2024-04-24 20:25:32.292034] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.869 ms 00:19:02.187 [2024-04-24 20:25:32.292044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-04-24 20:25:32.312459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-04-24 20:25:32.312515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:02.187 [2024-04-24 20:25:32.312531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.338 ms 00:19:02.187 [2024-04-24 20:25:32.312541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-04-24 20:25:32.313081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-04-24 20:25:32.313098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:02.187 [2024-04-24 20:25:32.313109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:19:02.187 [2024-04-24 20:25:32.313120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.187 [2024-04-24 20:25:32.406527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.187 [2024-04-24 20:25:32.406584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:02.187 [2024-04-24 20:25:32.406601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.530 ms 00:19:02.187 [2024-04-24 20:25:32.406612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.446 [2024-04-24 20:25:32.420415] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:02.446 [2024-04-24 20:25:32.437200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.446 [2024-04-24 20:25:32.437258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:02.446 [2024-04-24 20:25:32.437274] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.510 ms 00:19:02.446 [2024-04-24 20:25:32.437284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.446 [2024-04-24 20:25:32.437395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.446 [2024-04-24 20:25:32.437408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:02.446 [2024-04-24 20:25:32.437420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:02.446 [2024-04-24 20:25:32.437430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.446 [2024-04-24 20:25:32.437483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.446 [2024-04-24 20:25:32.437494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:02.446 [2024-04-24 20:25:32.437508] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:02.446 [2024-04-24 20:25:32.437518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.446 [2024-04-24 20:25:32.439680] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.446 [2024-04-24 20:25:32.439713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:02.446 [2024-04-24 20:25:32.439725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.145 ms 00:19:02.446 [2024-04-24 20:25:32.439736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.446 [2024-04-24 20:25:32.439772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.446 [2024-04-24 20:25:32.439784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:02.446 [2024-04-24 20:25:32.439796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:02.446 [2024-04-24 20:25:32.439812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.446 [2024-04-24 20:25:32.439859] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:02.446 [2024-04-24 20:25:32.439881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.446 [2024-04-24 20:25:32.439891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:02.446 [2024-04-24 20:25:32.439901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:02.446 [2024-04-24 20:25:32.439911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.446 [2024-04-24 20:25:32.479907] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.446 [2024-04-24 20:25:32.480090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:02.446 [2024-04-24 20:25:32.480253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.034 ms 00:19:02.446 [2024-04-24 20:25:32.480303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.446 [2024-04-24 20:25:32.480441] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.446 [2024-04-24 20:25:32.480525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:02.446 [2024-04-24 20:25:32.480561] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:02.446 [2024-04-24 20:25:32.480591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.446 [2024-04-24 20:25:32.481617] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:02.446 [2024-04-24 20:25:32.487256] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 419.650 ms, result 0 00:19:02.446 [2024-04-24 20:25:32.488080] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:02.446 [2024-04-24 20:25:32.505663] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:10.935  Copying: 32/256 [MB] (32 MBps) Copying: 63/256 [MB] (30 MBps) Copying: 94/256 [MB] (31 MBps) Copying: 125/256 [MB] (31 MBps) Copying: 157/256 [MB] (32 MBps) Copying: 188/256 [MB] (30 MBps) Copying: 216/256 [MB] (28 MBps) Copying: 249/256 [MB] (32 MBps) Copying: 256/256 [MB] (average 31 MBps)[2024-04-24 20:25:40.976927] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:10.935 [2024-04-24 20:25:41.000993] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.935 [2024-04-24 20:25:41.001062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:10.935 [2024-04-24 20:25:41.001079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:10.935 [2024-04-24 20:25:41.001089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.935 [2024-04-24 20:25:41.001117] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:10.935 [2024-04-24 20:25:41.004870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.935 [2024-04-24 20:25:41.004909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:10.935 [2024-04-24 20:25:41.004930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.739 ms 00:19:10.935 [2024-04-24 20:25:41.004940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.935 [2024-04-24 20:25:41.005214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.935 [2024-04-24 20:25:41.005231] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:10.935 [2024-04-24 20:25:41.005243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:19:10.935 [2024-04-24 20:25:41.005258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.935 [2024-04-24 20:25:41.008214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.935 [2024-04-24 20:25:41.008255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:10.935 [2024-04-24 20:25:41.008271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.935 ms 00:19:10.935 [2024-04-24 20:25:41.008284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.935 [2024-04-24 20:25:41.014131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.935 [2024-04-24 20:25:41.014172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:10.935 [2024-04-24 20:25:41.014184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.822 ms 00:19:10.935 [2024-04-24 20:25:41.014194] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.935 [2024-04-24 20:25:41.055512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.935 [2024-04-24 20:25:41.055611] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:10.935 [2024-04-24 20:25:41.055630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.308 ms 00:19:10.935 [2024-04-24 20:25:41.055653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.935 [2024-04-24 20:25:41.078697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.935 [2024-04-24 20:25:41.078790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:10.935 [2024-04-24 20:25:41.078810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.999 ms 00:19:10.935 [2024-04-24 20:25:41.078821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.935 [2024-04-24 20:25:41.079034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.935 [2024-04-24 20:25:41.079070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:10.935 [2024-04-24 20:25:41.079082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:10.935 [2024-04-24 20:25:41.079093] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.935 [2024-04-24 20:25:41.122264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.935 [2024-04-24 20:25:41.122325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:10.935 [2024-04-24 20:25:41.122341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.215 ms 00:19:10.935 [2024-04-24 20:25:41.122352] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.935 [2024-04-24 20:25:41.161209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.935 [2024-04-24 20:25:41.161276] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:10.935 [2024-04-24 20:25:41.161294] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.837 ms 00:19:10.935 [2024-04-24 20:25:41.161304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.196 [2024-04-24 20:25:41.200878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.196 [2024-04-24 20:25:41.200943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:11.196 [2024-04-24 20:25:41.200960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.564 ms 00:19:11.196 [2024-04-24 20:25:41.200970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.196 [2024-04-24 20:25:41.239995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.196 [2024-04-24 20:25:41.240095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:11.196 [2024-04-24 20:25:41.240114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.992 ms 00:19:11.196 [2024-04-24 20:25:41.240125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.196 [2024-04-24 20:25:41.240190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:11.196 [2024-04-24 20:25:41.240210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240923] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.240997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241360] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:11.197 [2024-04-24 20:25:41.241596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:11.198 [2024-04-24 20:25:41.241610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:11.198 [2024-04-24 20:25:41.241650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:11.198 [2024-04-24 20:25:41.241670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:11.198 [2024-04-24 20:25:41.241692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:11.198 [2024-04-24 20:25:41.241711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:11.198 [2024-04-24 20:25:41.241732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:11.198 [2024-04-24 20:25:41.241751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:11.198 [2024-04-24 20:25:41.241771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:11.198 [2024-04-24 20:25:41.241790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:11.198 [2024-04-24 20:25:41.241818] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:11.198 [2024-04-24 20:25:41.241844] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 59cba280-e672-4cc5-80b9-6f15d46f3b14 00:19:11.198 [2024-04-24 20:25:41.241859] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:11.198 [2024-04-24 20:25:41.241899] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:11.198 [2024-04-24 20:25:41.241919] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:11.198 [2024-04-24 20:25:41.241939] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:11.198 [2024-04-24 20:25:41.241956] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:11.198 [2024-04-24 20:25:41.241975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:11.198 [2024-04-24 20:25:41.241998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:11.198 [2024-04-24 20:25:41.242015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:11.198 [2024-04-24 20:25:41.242043] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:11.198 [2024-04-24 20:25:41.242063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.198 [2024-04-24 20:25:41.242081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:11.198 [2024-04-24 20:25:41.242095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.877 ms 00:19:11.198 [2024-04-24 20:25:41.242108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.198 [2024-04-24 20:25:41.260800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.198 [2024-04-24 20:25:41.260849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:11.198 [2024-04-24 20:25:41.260877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.676 ms 00:19:11.198 [2024-04-24 20:25:41.260888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.198 [2024-04-24 20:25:41.261169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.198 [2024-04-24 20:25:41.261181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:11.198 [2024-04-24 20:25:41.261192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:19:11.198 [2024-04-24 20:25:41.261225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.198 [2024-04-24 20:25:41.319913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.198 [2024-04-24 20:25:41.319980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:11.198 [2024-04-24 20:25:41.319996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.198 [2024-04-24 20:25:41.320007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.198 [2024-04-24 20:25:41.320116] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.198 [2024-04-24 20:25:41.320128] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:11.198 [2024-04-24 20:25:41.320138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.198 [2024-04-24 20:25:41.320153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.198 [2024-04-24 20:25:41.320207] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.198 [2024-04-24 20:25:41.320220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:11.198 [2024-04-24 20:25:41.320230] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:19:11.198 [2024-04-24 20:25:41.320239] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.198 [2024-04-24 20:25:41.320259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.198 [2024-04-24 20:25:41.320269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:11.198 [2024-04-24 20:25:41.320279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.198 [2024-04-24 20:25:41.320288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.457 [2024-04-24 20:25:41.439930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.457 [2024-04-24 20:25:41.439995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:11.457 [2024-04-24 20:25:41.440011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.457 [2024-04-24 20:25:41.440021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.457 [2024-04-24 20:25:41.487757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.457 [2024-04-24 20:25:41.487826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:11.457 [2024-04-24 20:25:41.487859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.457 [2024-04-24 20:25:41.487900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.457 [2024-04-24 20:25:41.487994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.457 [2024-04-24 20:25:41.488005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:11.457 [2024-04-24 20:25:41.488016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.457 [2024-04-24 20:25:41.488026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.457 [2024-04-24 20:25:41.488055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.457 [2024-04-24 20:25:41.488065] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:11.457 [2024-04-24 20:25:41.488075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.457 [2024-04-24 20:25:41.488084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.457 [2024-04-24 20:25:41.488196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.457 [2024-04-24 20:25:41.488209] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:11.457 [2024-04-24 20:25:41.488220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.457 [2024-04-24 20:25:41.488230] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.457 [2024-04-24 20:25:41.488289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.457 [2024-04-24 20:25:41.488309] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:11.457 [2024-04-24 20:25:41.488323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.457 [2024-04-24 20:25:41.488339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.457 [2024-04-24 20:25:41.488395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.457 [2024-04-24 20:25:41.488410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:11.457 [2024-04-24 
20:25:41.488421] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.457 [2024-04-24 20:25:41.488430] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.457 [2024-04-24 20:25:41.488494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.457 [2024-04-24 20:25:41.488513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:11.458 [2024-04-24 20:25:41.488530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.458 [2024-04-24 20:25:41.488545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.458 [2024-04-24 20:25:41.488749] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 488.535 ms, result 0 00:19:12.835 00:19:12.835 00:19:12.835 20:25:42 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:13.094 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:13.094 20:25:43 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:13.094 20:25:43 -- ftl/trim.sh@109 -- # fio_kill 00:19:13.094 20:25:43 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:13.094 20:25:43 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:13.094 20:25:43 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:13.353 20:25:43 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:13.353 20:25:43 -- ftl/trim.sh@20 -- # killprocess 78967 00:19:13.353 20:25:43 -- common/autotest_common.sh@936 -- # '[' -z 78967 ']' 00:19:13.353 20:25:43 -- common/autotest_common.sh@940 -- # kill -0 78967 00:19:13.353 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (78967) - No such process 00:19:13.353 Process with pid 78967 is not found 00:19:13.353 20:25:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 78967 is not found' 00:19:13.353 00:19:13.353 real 1m9.285s 00:19:13.353 user 1m37.690s 00:19:13.353 sys 0m6.678s 00:19:13.353 20:25:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:13.353 20:25:43 -- common/autotest_common.sh@10 -- # set +x 00:19:13.353 ************************************ 00:19:13.353 END TEST ftl_trim 00:19:13.353 ************************************ 00:19:13.353 20:25:43 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:13.353 20:25:43 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:19:13.353 20:25:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:13.353 20:25:43 -- common/autotest_common.sh@10 -- # set +x 00:19:13.353 ************************************ 00:19:13.353 START TEST ftl_restore 00:19:13.353 ************************************ 00:19:13.353 20:25:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:13.612 * Looking for test storage... 
00:19:13.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:13.612 20:25:43 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:13.612 20:25:43 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:13.612 20:25:43 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:13.612 20:25:43 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:13.612 20:25:43 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:13.612 20:25:43 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:13.612 20:25:43 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.612 20:25:43 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:13.612 20:25:43 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:13.612 20:25:43 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.612 20:25:43 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.612 20:25:43 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:13.612 20:25:43 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:13.612 20:25:43 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:13.612 20:25:43 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:13.612 20:25:43 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:13.612 20:25:43 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:13.612 20:25:43 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.612 20:25:43 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.612 20:25:43 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:13.613 20:25:43 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:13.613 20:25:43 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:13.613 20:25:43 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:13.613 20:25:43 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:13.613 20:25:43 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:13.613 20:25:43 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:13.613 20:25:43 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:13.613 20:25:43 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.613 20:25:43 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:13.613 20:25:43 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:13.613 20:25:43 -- ftl/restore.sh@13 -- # mktemp -d 00:19:13.613 20:25:43 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.zqh7UnHE4N 00:19:13.613 20:25:43 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:13.613 20:25:43 -- ftl/restore.sh@16 -- # case $opt in 00:19:13.613 20:25:43 -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:13.613 20:25:43 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:13.613 20:25:43 -- ftl/restore.sh@23 -- # shift 2 00:19:13.613 20:25:43 -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:13.613 20:25:43 -- ftl/restore.sh@25 -- # timeout=240 00:19:13.613 20:25:43 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 
00:19:13.613 20:25:43 -- ftl/restore.sh@39 -- # svcpid=79236 00:19:13.613 20:25:43 -- ftl/restore.sh@41 -- # waitforlisten 79236 00:19:13.613 20:25:43 -- common/autotest_common.sh@817 -- # '[' -z 79236 ']' 00:19:13.613 20:25:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.613 20:25:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:13.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.613 20:25:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.613 20:25:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:13.613 20:25:43 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:13.613 20:25:43 -- common/autotest_common.sh@10 -- # set +x 00:19:13.872 [2024-04-24 20:25:43.846295] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:19:13.872 [2024-04-24 20:25:43.846439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79236 ] 00:19:13.872 [2024-04-24 20:25:44.019182] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.130 [2024-04-24 20:25:44.274655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.065 20:25:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:15.065 20:25:45 -- common/autotest_common.sh@850 -- # return 0 00:19:15.066 20:25:45 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:15.066 20:25:45 -- ftl/common.sh@54 -- # local name=nvme0 00:19:15.066 20:25:45 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:15.066 20:25:45 -- ftl/common.sh@56 -- # local size=103424 00:19:15.066 20:25:45 -- ftl/common.sh@59 -- # local base_bdev 00:19:15.066 20:25:45 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:15.325 20:25:45 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:15.325 20:25:45 -- ftl/common.sh@62 -- # local base_size 00:19:15.325 20:25:45 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:15.325 20:25:45 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:19:15.325 20:25:45 -- common/autotest_common.sh@1365 -- # local bdev_info 00:19:15.584 20:25:45 -- common/autotest_common.sh@1366 -- # local bs 00:19:15.584 20:25:45 -- common/autotest_common.sh@1367 -- # local nb 00:19:15.584 20:25:45 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:15.584 20:25:45 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:19:15.584 { 00:19:15.584 "name": "nvme0n1", 00:19:15.584 "aliases": [ 00:19:15.584 "4d2a8bf4-eb1c-41dc-8b89-b835596985bc" 00:19:15.584 ], 00:19:15.584 "product_name": "NVMe disk", 00:19:15.584 "block_size": 4096, 00:19:15.584 "num_blocks": 1310720, 00:19:15.584 "uuid": "4d2a8bf4-eb1c-41dc-8b89-b835596985bc", 00:19:15.584 "assigned_rate_limits": { 00:19:15.584 "rw_ios_per_sec": 0, 00:19:15.584 "rw_mbytes_per_sec": 0, 00:19:15.584 "r_mbytes_per_sec": 0, 00:19:15.584 "w_mbytes_per_sec": 0 00:19:15.584 }, 00:19:15.584 "claimed": true, 00:19:15.584 "claim_type": "read_many_write_one", 00:19:15.584 "zoned": false, 00:19:15.584 "supported_io_types": { 00:19:15.584 "read": true, 00:19:15.584 "write": 
true, 00:19:15.584 "unmap": true, 00:19:15.584 "write_zeroes": true, 00:19:15.584 "flush": true, 00:19:15.584 "reset": true, 00:19:15.584 "compare": true, 00:19:15.584 "compare_and_write": false, 00:19:15.584 "abort": true, 00:19:15.584 "nvme_admin": true, 00:19:15.584 "nvme_io": true 00:19:15.584 }, 00:19:15.584 "driver_specific": { 00:19:15.584 "nvme": [ 00:19:15.584 { 00:19:15.584 "pci_address": "0000:00:11.0", 00:19:15.584 "trid": { 00:19:15.584 "trtype": "PCIe", 00:19:15.584 "traddr": "0000:00:11.0" 00:19:15.584 }, 00:19:15.584 "ctrlr_data": { 00:19:15.584 "cntlid": 0, 00:19:15.584 "vendor_id": "0x1b36", 00:19:15.584 "model_number": "QEMU NVMe Ctrl", 00:19:15.584 "serial_number": "12341", 00:19:15.584 "firmware_revision": "8.0.0", 00:19:15.584 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:15.584 "oacs": { 00:19:15.584 "security": 0, 00:19:15.584 "format": 1, 00:19:15.584 "firmware": 0, 00:19:15.584 "ns_manage": 1 00:19:15.584 }, 00:19:15.584 "multi_ctrlr": false, 00:19:15.584 "ana_reporting": false 00:19:15.584 }, 00:19:15.584 "vs": { 00:19:15.584 "nvme_version": "1.4" 00:19:15.584 }, 00:19:15.584 "ns_data": { 00:19:15.584 "id": 1, 00:19:15.584 "can_share": false 00:19:15.584 } 00:19:15.584 } 00:19:15.584 ], 00:19:15.584 "mp_policy": "active_passive" 00:19:15.584 } 00:19:15.584 } 00:19:15.584 ]' 00:19:15.584 20:25:45 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:19:15.584 20:25:45 -- common/autotest_common.sh@1369 -- # bs=4096 00:19:15.584 20:25:45 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:19:15.843 20:25:45 -- common/autotest_common.sh@1370 -- # nb=1310720 00:19:15.843 20:25:45 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:19:15.843 20:25:45 -- common/autotest_common.sh@1374 -- # echo 5120 00:19:15.843 20:25:45 -- ftl/common.sh@63 -- # base_size=5120 00:19:15.843 20:25:45 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:15.843 20:25:45 -- ftl/common.sh@67 -- # clear_lvols 00:19:15.843 20:25:45 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:15.843 20:25:45 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:15.843 20:25:46 -- ftl/common.sh@28 -- # stores=5758f976-9653-4916-99dd-b7bf5944f1ab 00:19:15.843 20:25:46 -- ftl/common.sh@29 -- # for lvs in $stores 00:19:15.843 20:25:46 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5758f976-9653-4916-99dd-b7bf5944f1ab 00:19:16.103 20:25:46 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:16.362 20:25:46 -- ftl/common.sh@68 -- # lvs=54bca998-ed92-4533-8a76-7545a9cb1975 00:19:16.362 20:25:46 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 54bca998-ed92-4533-8a76-7545a9cb1975 00:19:16.621 20:25:46 -- ftl/restore.sh@43 -- # split_bdev=c6bfd4cb-da5d-4a21-8eec-46426382ae14 00:19:16.621 20:25:46 -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:19:16.621 20:25:46 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c6bfd4cb-da5d-4a21-8eec-46426382ae14 00:19:16.621 20:25:46 -- ftl/common.sh@35 -- # local name=nvc0 00:19:16.621 20:25:46 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:16.621 20:25:46 -- ftl/common.sh@37 -- # local base_bdev=c6bfd4cb-da5d-4a21-8eec-46426382ae14 00:19:16.621 20:25:46 -- ftl/common.sh@38 -- # local cache_size= 00:19:16.621 20:25:46 -- ftl/common.sh@41 -- # get_bdev_size c6bfd4cb-da5d-4a21-8eec-46426382ae14 00:19:16.621 
20:25:46 -- common/autotest_common.sh@1364 -- # local bdev_name=c6bfd4cb-da5d-4a21-8eec-46426382ae14 00:19:16.621 20:25:46 -- common/autotest_common.sh@1365 -- # local bdev_info 00:19:16.621 20:25:46 -- common/autotest_common.sh@1366 -- # local bs 00:19:16.621 20:25:46 -- common/autotest_common.sh@1367 -- # local nb 00:19:16.621 20:25:46 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c6bfd4cb-da5d-4a21-8eec-46426382ae14 00:19:16.881 20:25:46 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:19:16.881 { 00:19:16.881 "name": "c6bfd4cb-da5d-4a21-8eec-46426382ae14", 00:19:16.881 "aliases": [ 00:19:16.881 "lvs/nvme0n1p0" 00:19:16.881 ], 00:19:16.881 "product_name": "Logical Volume", 00:19:16.881 "block_size": 4096, 00:19:16.881 "num_blocks": 26476544, 00:19:16.881 "uuid": "c6bfd4cb-da5d-4a21-8eec-46426382ae14", 00:19:16.881 "assigned_rate_limits": { 00:19:16.881 "rw_ios_per_sec": 0, 00:19:16.881 "rw_mbytes_per_sec": 0, 00:19:16.881 "r_mbytes_per_sec": 0, 00:19:16.881 "w_mbytes_per_sec": 0 00:19:16.881 }, 00:19:16.881 "claimed": false, 00:19:16.881 "zoned": false, 00:19:16.881 "supported_io_types": { 00:19:16.881 "read": true, 00:19:16.881 "write": true, 00:19:16.881 "unmap": true, 00:19:16.881 "write_zeroes": true, 00:19:16.881 "flush": false, 00:19:16.881 "reset": true, 00:19:16.881 "compare": false, 00:19:16.881 "compare_and_write": false, 00:19:16.881 "abort": false, 00:19:16.881 "nvme_admin": false, 00:19:16.881 "nvme_io": false 00:19:16.881 }, 00:19:16.881 "driver_specific": { 00:19:16.881 "lvol": { 00:19:16.881 "lvol_store_uuid": "54bca998-ed92-4533-8a76-7545a9cb1975", 00:19:16.881 "base_bdev": "nvme0n1", 00:19:16.881 "thin_provision": true, 00:19:16.881 "snapshot": false, 00:19:16.881 "clone": false, 00:19:16.881 "esnap_clone": false 00:19:16.881 } 00:19:16.881 } 00:19:16.881 } 00:19:16.881 ]' 00:19:16.881 20:25:46 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:19:16.881 20:25:46 -- common/autotest_common.sh@1369 -- # bs=4096 00:19:16.881 20:25:46 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:19:16.881 20:25:46 -- common/autotest_common.sh@1370 -- # nb=26476544 00:19:16.881 20:25:46 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:19:16.881 20:25:46 -- common/autotest_common.sh@1374 -- # echo 103424 00:19:16.881 20:25:46 -- ftl/common.sh@41 -- # local base_size=5171 00:19:16.881 20:25:46 -- ftl/common.sh@44 -- # local nvc_bdev 00:19:16.881 20:25:46 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:17.140 20:25:47 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:17.140 20:25:47 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:17.140 20:25:47 -- ftl/common.sh@48 -- # get_bdev_size c6bfd4cb-da5d-4a21-8eec-46426382ae14 00:19:17.140 20:25:47 -- common/autotest_common.sh@1364 -- # local bdev_name=c6bfd4cb-da5d-4a21-8eec-46426382ae14 00:19:17.140 20:25:47 -- common/autotest_common.sh@1365 -- # local bdev_info 00:19:17.140 20:25:47 -- common/autotest_common.sh@1366 -- # local bs 00:19:17.140 20:25:47 -- common/autotest_common.sh@1367 -- # local nb 00:19:17.140 20:25:47 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c6bfd4cb-da5d-4a21-8eec-46426382ae14 00:19:17.398 20:25:47 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:19:17.398 { 00:19:17.398 "name": "c6bfd4cb-da5d-4a21-8eec-46426382ae14", 00:19:17.398 "aliases": [ 00:19:17.398 "lvs/nvme0n1p0" 00:19:17.398 
], 00:19:17.398 "product_name": "Logical Volume", 00:19:17.398 "block_size": 4096, 00:19:17.398 "num_blocks": 26476544, 00:19:17.398 "uuid": "c6bfd4cb-da5d-4a21-8eec-46426382ae14", 00:19:17.398 "assigned_rate_limits": { 00:19:17.398 "rw_ios_per_sec": 0, 00:19:17.398 "rw_mbytes_per_sec": 0, 00:19:17.398 "r_mbytes_per_sec": 0, 00:19:17.398 "w_mbytes_per_sec": 0 00:19:17.398 }, 00:19:17.398 "claimed": false, 00:19:17.398 "zoned": false, 00:19:17.398 "supported_io_types": { 00:19:17.398 "read": true, 00:19:17.398 "write": true, 00:19:17.398 "unmap": true, 00:19:17.398 "write_zeroes": true, 00:19:17.398 "flush": false, 00:19:17.398 "reset": true, 00:19:17.398 "compare": false, 00:19:17.398 "compare_and_write": false, 00:19:17.398 "abort": false, 00:19:17.398 "nvme_admin": false, 00:19:17.398 "nvme_io": false 00:19:17.398 }, 00:19:17.398 "driver_specific": { 00:19:17.398 "lvol": { 00:19:17.398 "lvol_store_uuid": "54bca998-ed92-4533-8a76-7545a9cb1975", 00:19:17.398 "base_bdev": "nvme0n1", 00:19:17.398 "thin_provision": true, 00:19:17.398 "snapshot": false, 00:19:17.398 "clone": false, 00:19:17.398 "esnap_clone": false 00:19:17.398 } 00:19:17.398 } 00:19:17.398 } 00:19:17.398 ]' 00:19:17.398 20:25:47 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:19:17.398 20:25:47 -- common/autotest_common.sh@1369 -- # bs=4096 00:19:17.398 20:25:47 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:19:17.398 20:25:47 -- common/autotest_common.sh@1370 -- # nb=26476544 00:19:17.398 20:25:47 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:19:17.398 20:25:47 -- common/autotest_common.sh@1374 -- # echo 103424 00:19:17.398 20:25:47 -- ftl/common.sh@48 -- # cache_size=5171 00:19:17.398 20:25:47 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:17.656 20:25:47 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:19:17.656 20:25:47 -- ftl/restore.sh@48 -- # get_bdev_size c6bfd4cb-da5d-4a21-8eec-46426382ae14 00:19:17.656 20:25:47 -- common/autotest_common.sh@1364 -- # local bdev_name=c6bfd4cb-da5d-4a21-8eec-46426382ae14 00:19:17.656 20:25:47 -- common/autotest_common.sh@1365 -- # local bdev_info 00:19:17.656 20:25:47 -- common/autotest_common.sh@1366 -- # local bs 00:19:17.656 20:25:47 -- common/autotest_common.sh@1367 -- # local nb 00:19:17.656 20:25:47 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c6bfd4cb-da5d-4a21-8eec-46426382ae14 00:19:17.656 20:25:47 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:19:17.656 { 00:19:17.656 "name": "c6bfd4cb-da5d-4a21-8eec-46426382ae14", 00:19:17.656 "aliases": [ 00:19:17.656 "lvs/nvme0n1p0" 00:19:17.656 ], 00:19:17.656 "product_name": "Logical Volume", 00:19:17.656 "block_size": 4096, 00:19:17.656 "num_blocks": 26476544, 00:19:17.656 "uuid": "c6bfd4cb-da5d-4a21-8eec-46426382ae14", 00:19:17.656 "assigned_rate_limits": { 00:19:17.656 "rw_ios_per_sec": 0, 00:19:17.656 "rw_mbytes_per_sec": 0, 00:19:17.656 "r_mbytes_per_sec": 0, 00:19:17.656 "w_mbytes_per_sec": 0 00:19:17.656 }, 00:19:17.656 "claimed": false, 00:19:17.656 "zoned": false, 00:19:17.656 "supported_io_types": { 00:19:17.656 "read": true, 00:19:17.656 "write": true, 00:19:17.656 "unmap": true, 00:19:17.656 "write_zeroes": true, 00:19:17.656 "flush": false, 00:19:17.657 "reset": true, 00:19:17.657 "compare": false, 00:19:17.657 "compare_and_write": false, 00:19:17.657 "abort": false, 00:19:17.657 "nvme_admin": false, 00:19:17.657 "nvme_io": false 00:19:17.657 }, 00:19:17.657 
"driver_specific": { 00:19:17.657 "lvol": { 00:19:17.657 "lvol_store_uuid": "54bca998-ed92-4533-8a76-7545a9cb1975", 00:19:17.657 "base_bdev": "nvme0n1", 00:19:17.657 "thin_provision": true, 00:19:17.657 "snapshot": false, 00:19:17.657 "clone": false, 00:19:17.657 "esnap_clone": false 00:19:17.657 } 00:19:17.657 } 00:19:17.657 } 00:19:17.657 ]' 00:19:17.657 20:25:47 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:19:17.966 20:25:47 -- common/autotest_common.sh@1369 -- # bs=4096 00:19:17.966 20:25:47 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:19:17.967 20:25:47 -- common/autotest_common.sh@1370 -- # nb=26476544 00:19:17.967 20:25:47 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:19:17.967 20:25:47 -- common/autotest_common.sh@1374 -- # echo 103424 00:19:17.967 20:25:47 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:19:17.967 20:25:47 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c6bfd4cb-da5d-4a21-8eec-46426382ae14 --l2p_dram_limit 10' 00:19:17.967 20:25:47 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:19:17.967 20:25:47 -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:19:17.967 20:25:47 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:19:17.967 20:25:47 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:19:17.967 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:19:17.967 20:25:47 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c6bfd4cb-da5d-4a21-8eec-46426382ae14 --l2p_dram_limit 10 -c nvc0n1p0 00:19:17.967 [2024-04-24 20:25:48.145937] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.967 [2024-04-24 20:25:48.145998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:17.967 [2024-04-24 20:25:48.146019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:17.967 [2024-04-24 20:25:48.146031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.967 [2024-04-24 20:25:48.146095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.967 [2024-04-24 20:25:48.146107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.967 [2024-04-24 20:25:48.146124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:17.967 [2024-04-24 20:25:48.146134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.967 [2024-04-24 20:25:48.146159] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:17.967 [2024-04-24 20:25:48.147404] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:17.967 [2024-04-24 20:25:48.147445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.967 [2024-04-24 20:25:48.147458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.967 [2024-04-24 20:25:48.147476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.294 ms 00:19:17.967 [2024-04-24 20:25:48.147489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.967 [2024-04-24 20:25:48.147580] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c0c9aa20-1edc-4827-ab08-6b786b351665 00:19:17.967 [2024-04-24 20:25:48.149037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.967 [2024-04-24 
20:25:48.149072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:17.967 [2024-04-24 20:25:48.149085] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:17.967 [2024-04-24 20:25:48.149098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.967 [2024-04-24 20:25:48.156719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.967 [2024-04-24 20:25:48.156752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.967 [2024-04-24 20:25:48.156764] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.589 ms 00:19:17.967 [2024-04-24 20:25:48.156778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.967 [2024-04-24 20:25:48.156896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.967 [2024-04-24 20:25:48.156915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.967 [2024-04-24 20:25:48.156926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:19:17.967 [2024-04-24 20:25:48.156939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.967 [2024-04-24 20:25:48.157006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.967 [2024-04-24 20:25:48.157025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:17.967 [2024-04-24 20:25:48.157035] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:17.967 [2024-04-24 20:25:48.157047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.967 [2024-04-24 20:25:48.157073] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:17.967 [2024-04-24 20:25:48.163105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.967 [2024-04-24 20:25:48.163137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.967 [2024-04-24 20:25:48.163153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.045 ms 00:19:17.967 [2024-04-24 20:25:48.163163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.967 [2024-04-24 20:25:48.163202] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.967 [2024-04-24 20:25:48.163214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:17.967 [2024-04-24 20:25:48.163228] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:17.967 [2024-04-24 20:25:48.163238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.967 [2024-04-24 20:25:48.163289] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:17.967 [2024-04-24 20:25:48.163399] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:17.967 [2024-04-24 20:25:48.163417] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:17.967 [2024-04-24 20:25:48.163430] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:17.967 [2024-04-24 20:25:48.163449] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:17.967 [2024-04-24 20:25:48.163464] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:17.967 [2024-04-24 20:25:48.163479] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:17.967 [2024-04-24 20:25:48.163489] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:17.967 [2024-04-24 20:25:48.163501] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:17.967 [2024-04-24 20:25:48.163511] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:17.967 [2024-04-24 20:25:48.163537] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.967 [2024-04-24 20:25:48.163548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:17.967 [2024-04-24 20:25:48.163562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:19:17.967 [2024-04-24 20:25:48.163573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.967 [2024-04-24 20:25:48.163636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.967 [2024-04-24 20:25:48.163647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:17.967 [2024-04-24 20:25:48.163663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:17.967 [2024-04-24 20:25:48.163673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.967 [2024-04-24 20:25:48.163744] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:17.967 [2024-04-24 20:25:48.163757] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:17.967 [2024-04-24 20:25:48.163773] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.967 [2024-04-24 20:25:48.163783] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.967 [2024-04-24 20:25:48.163797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:17.967 [2024-04-24 20:25:48.163806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:17.967 [2024-04-24 20:25:48.163818] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:17.967 [2024-04-24 20:25:48.163828] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:17.967 [2024-04-24 20:25:48.163841] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:17.967 [2024-04-24 20:25:48.163851] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.967 [2024-04-24 20:25:48.163894] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:17.967 [2024-04-24 20:25:48.163909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:17.967 [2024-04-24 20:25:48.163921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.967 [2024-04-24 20:25:48.163930] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:17.967 [2024-04-24 20:25:48.163942] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:17.967 [2024-04-24 20:25:48.163952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.967 [2024-04-24 20:25:48.163965] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:17.967 [2024-04-24 20:25:48.163975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:17.967 [2024-04-24 20:25:48.163988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:17.967 [2024-04-24 20:25:48.163998] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:17.967 [2024-04-24 20:25:48.164009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:17.967 [2024-04-24 20:25:48.164019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:17.968 [2024-04-24 20:25:48.164030] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:17.968 [2024-04-24 20:25:48.164039] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:17.968 [2024-04-24 20:25:48.164051] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:17.968 [2024-04-24 20:25:48.164060] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:17.968 [2024-04-24 20:25:48.164072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:17.968 [2024-04-24 20:25:48.164080] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:17.968 [2024-04-24 20:25:48.164092] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:17.968 [2024-04-24 20:25:48.164101] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:17.968 [2024-04-24 20:25:48.164112] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:17.968 [2024-04-24 20:25:48.164121] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:17.968 [2024-04-24 20:25:48.164132] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:17.968 [2024-04-24 20:25:48.164141] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:17.968 [2024-04-24 20:25:48.164154] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:17.968 [2024-04-24 20:25:48.164163] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:17.968 [2024-04-24 20:25:48.164174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.968 [2024-04-24 20:25:48.164183] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:17.968 [2024-04-24 20:25:48.164194] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:17.968 [2024-04-24 20:25:48.164203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.968 [2024-04-24 20:25:48.164213] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:17.968 [2024-04-24 20:25:48.164223] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:17.968 [2024-04-24 20:25:48.164237] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.968 [2024-04-24 20:25:48.164249] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.968 [2024-04-24 20:25:48.164261] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:17.968 [2024-04-24 20:25:48.164270] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:17.968 [2024-04-24 20:25:48.164281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:17.968 [2024-04-24 20:25:48.164294] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:17.968 [2024-04-24 20:25:48.164306] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:17.968 [2024-04-24 20:25:48.164316] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:17.968 [2024-04-24 20:25:48.164331] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:17.968 [2024-04-24 20:25:48.164343] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.968 [2024-04-24 20:25:48.164357] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:17.968 [2024-04-24 20:25:48.164367] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:17.968 [2024-04-24 20:25:48.164380] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:17.968 [2024-04-24 20:25:48.164391] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:17.968 [2024-04-24 20:25:48.164403] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:17.968 [2024-04-24 20:25:48.164413] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:17.968 [2024-04-24 20:25:48.164425] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:17.968 [2024-04-24 20:25:48.164436] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:17.968 [2024-04-24 20:25:48.164448] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:17.968 [2024-04-24 20:25:48.164459] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:17.968 [2024-04-24 20:25:48.164472] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:17.968 [2024-04-24 20:25:48.164482] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:17.968 [2024-04-24 20:25:48.164495] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:17.968 [2024-04-24 20:25:48.164505] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:17.968 [2024-04-24 20:25:48.164521] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.968 [2024-04-24 20:25:48.164532] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:17.968 [2024-04-24 20:25:48.164545] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:17.968 [2024-04-24 20:25:48.164555] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:17.968 [2024-04-24 20:25:48.164568] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:19:17.968 [2024-04-24 20:25:48.164578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.968 [2024-04-24 20:25:48.164591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:17.968 [2024-04-24 20:25:48.164601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:19:17.968 [2024-04-24 20:25:48.164616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.968 [2024-04-24 20:25:48.189932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.968 [2024-04-24 20:25:48.189976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.968 [2024-04-24 20:25:48.189990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.311 ms 00:19:17.968 [2024-04-24 20:25:48.190002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.968 [2024-04-24 20:25:48.190093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.968 [2024-04-24 20:25:48.190109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:17.968 [2024-04-24 20:25:48.190119] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:17.968 [2024-04-24 20:25:48.190133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.231 [2024-04-24 20:25:48.243383] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.231 [2024-04-24 20:25:48.243433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:18.231 [2024-04-24 20:25:48.243448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.274 ms 00:19:18.231 [2024-04-24 20:25:48.243462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.231 [2024-04-24 20:25:48.243509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.231 [2024-04-24 20:25:48.243522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:18.231 [2024-04-24 20:25:48.243533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:18.231 [2024-04-24 20:25:48.243548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.231 [2024-04-24 20:25:48.244055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.231 [2024-04-24 20:25:48.244075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:18.231 [2024-04-24 20:25:48.244086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:19:18.231 [2024-04-24 20:25:48.244098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.231 [2024-04-24 20:25:48.244208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.231 [2024-04-24 20:25:48.244225] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:18.231 [2024-04-24 20:25:48.244235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:18.231 [2024-04-24 20:25:48.244249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.231 [2024-04-24 20:25:48.268749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.231 [2024-04-24 20:25:48.268799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:18.231 [2024-04-24 20:25:48.268814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.517 ms 00:19:18.231 [2024-04-24 
20:25:48.268830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.231 [2024-04-24 20:25:48.283160] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:18.231 [2024-04-24 20:25:48.286472] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.231 [2024-04-24 20:25:48.286504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:18.231 [2024-04-24 20:25:48.286521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.540 ms 00:19:18.231 [2024-04-24 20:25:48.286531] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.231 [2024-04-24 20:25:48.374476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.231 [2024-04-24 20:25:48.374543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:18.231 [2024-04-24 20:25:48.374562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.041 ms 00:19:18.231 [2024-04-24 20:25:48.374573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.231 [2024-04-24 20:25:48.374630] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:19:18.231 [2024-04-24 20:25:48.374645] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:19:21.518 [2024-04-24 20:25:51.657183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.518 [2024-04-24 20:25:51.657256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:21.518 [2024-04-24 20:25:51.657277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3287.867 ms 00:19:21.518 [2024-04-24 20:25:51.657288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.518 [2024-04-24 20:25:51.657528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.518 [2024-04-24 20:25:51.657546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:21.518 [2024-04-24 20:25:51.657559] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:19:21.518 [2024-04-24 20:25:51.657569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.518 [2024-04-24 20:25:51.699184] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.518 [2024-04-24 20:25:51.699249] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:21.518 [2024-04-24 20:25:51.699270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.596 ms 00:19:21.518 [2024-04-24 20:25:51.699281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.518 [2024-04-24 20:25:51.740242] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.518 [2024-04-24 20:25:51.740306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:21.518 [2024-04-24 20:25:51.740329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.947 ms 00:19:21.518 [2024-04-24 20:25:51.740339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.518 [2024-04-24 20:25:51.740797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.518 [2024-04-24 20:25:51.740811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:21.518 [2024-04-24 20:25:51.740825] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.387 ms 00:19:21.518 [2024-04-24 20:25:51.740839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.777 [2024-04-24 20:25:51.842440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.777 [2024-04-24 20:25:51.842507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:21.777 [2024-04-24 20:25:51.842528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 101.663 ms 00:19:21.777 [2024-04-24 20:25:51.842539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.777 [2024-04-24 20:25:51.885146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.777 [2024-04-24 20:25:51.885397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:21.777 [2024-04-24 20:25:51.885491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.597 ms 00:19:21.777 [2024-04-24 20:25:51.885528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.777 [2024-04-24 20:25:51.887822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.777 [2024-04-24 20:25:51.887870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:21.777 [2024-04-24 20:25:51.887898] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.226 ms 00:19:21.777 [2024-04-24 20:25:51.887909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.777 [2024-04-24 20:25:51.929319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.777 [2024-04-24 20:25:51.929382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:21.777 [2024-04-24 20:25:51.929401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.376 ms 00:19:21.777 [2024-04-24 20:25:51.929412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.777 [2024-04-24 20:25:51.929475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.777 [2024-04-24 20:25:51.929487] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:21.777 [2024-04-24 20:25:51.929501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:21.777 [2024-04-24 20:25:51.929514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.777 [2024-04-24 20:25:51.929633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.777 [2024-04-24 20:25:51.929646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:21.777 [2024-04-24 20:25:51.929659] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:21.777 [2024-04-24 20:25:51.929669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.777 [2024-04-24 20:25:51.930785] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3790.512 ms, result 0 00:19:21.777 { 00:19:21.777 "name": "ftl0", 00:19:21.777 "uuid": "c0c9aa20-1edc-4827-ab08-6b786b351665" 00:19:21.777 } 00:19:21.777 20:25:51 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:19:21.777 20:25:51 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:22.037 20:25:52 -- ftl/restore.sh@63 -- # echo ']}' 00:19:22.037 20:25:52 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 
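The size probe replayed earlier (common/autotest_common.sh@1364-1374) reads block_size and num_blocks out of bdev_get_bdevs with jq and converts the product to MiB: 26476544 blocks x 4096 B = 103424 MiB, the figure echoed by get_bdev_size. A minimal sketch of that arithmetic, assuming a running SPDK target reachable through the stock rpc.py client (the variable names here are illustrative, not the helper's own):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev=c6bfd4cb-da5d-4a21-8eec-46426382ae14
    # jq filters match the ones traced at common.sh@1369-1370
    bs=$("$rpc" bdev_get_bdevs -b "$bdev" | jq '.[] .block_size')   # 4096
    nb=$("$rpc" bdev_get_bdevs -b "$bdev" | jq '.[] .num_blocks')   # 26476544
    echo $(( bs * nb / 1024 / 1024 ))                               # 103424 (MiB)

The cache_size=5171 set at ftl/common.sh@48 is consistent with 5 % of that base size (103424 * 5 / 100 = 5171 under integer division), and it is the -s argument handed to bdev_split_create when nvc0n1p0 is carved out of nvc0n1 as the write-buffer cache.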
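The "[: : integer expression expected" message recorded at restore.sh line 54 comes from the traced test '[' '' -eq 1 ']': an empty string is handed to the numeric -eq operator, so the test errors out and evaluates false, which is why the run simply continues without the optional branch. A defensive variant expands a default value before comparing; a minimal sketch, with $fast_shutdown as a placeholder name rather than the variable restore.sh actually uses:

    fast_shutdown=''                          # unset/empty in this run
    if [ "${fast_shutdown:-0}" -eq 1 ]; then  # empty expands to 0; no error, test is false
        echo 'fast shutdown path'
    fi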
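Immediately before the unload, the freshly started ftl0 (UUID c0c9aa20-1edc-4827-ab08-6b786b351665) has its bdev subsystem state captured: the two echo lines wrap the save_subsystem_config output into a complete JSON document, and the clean bdev_ftl_unload that follows persists the FTL superblock so the restore half of the test can bring the same device back. A sketch of that capture, assuming the combined output is redirected into the ftl.json later consumed by spdk_dd (the brace-group redirection is an assumption; the paths and RPC calls mirror the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    {
        echo '{"subsystems": ['
        "$rpc" save_subsystem_config -n bdev   # bdev subsystem config only
        echo ']}'
    } > "$cfg"
    "$rpc" bdev_ftl_unload -b ftl0             # the 'FTL shutdown' trace that follows

The "Set FTL clean state" and "Persist superblock" steps visible in the shutdown trace are what let the later startup take the "Load super block" path instead of re-scrubbing the NV cache.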
00:19:22.297 [2024-04-24 20:25:52.333590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.297 [2024-04-24 20:25:52.333665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:22.297 [2024-04-24 20:25:52.333683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:22.297 [2024-04-24 20:25:52.333700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.297 [2024-04-24 20:25:52.333728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:22.297 [2024-04-24 20:25:52.337551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.297 [2024-04-24 20:25:52.337591] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:22.297 [2024-04-24 20:25:52.337611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.803 ms 00:19:22.297 [2024-04-24 20:25:52.337621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.297 [2024-04-24 20:25:52.337910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.297 [2024-04-24 20:25:52.337925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:22.297 [2024-04-24 20:25:52.337954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:22.297 [2024-04-24 20:25:52.337964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.297 [2024-04-24 20:25:52.340641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.297 [2024-04-24 20:25:52.340664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:22.297 [2024-04-24 20:25:52.340679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.660 ms 00:19:22.297 [2024-04-24 20:25:52.340691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.297 [2024-04-24 20:25:52.345822] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.297 [2024-04-24 20:25:52.345873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:22.297 [2024-04-24 20:25:52.345897] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.108 ms 00:19:22.297 [2024-04-24 20:25:52.345907] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.297 [2024-04-24 20:25:52.387211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.298 [2024-04-24 20:25:52.387281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:22.298 [2024-04-24 20:25:52.387302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.254 ms 00:19:22.298 [2024-04-24 20:25:52.387313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.298 [2024-04-24 20:25:52.411898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.298 [2024-04-24 20:25:52.411969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:22.298 [2024-04-24 20:25:52.411999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.538 ms 00:19:22.298 [2024-04-24 20:25:52.412010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.298 [2024-04-24 20:25:52.412224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.298 [2024-04-24 20:25:52.412239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:22.298 
[2024-04-24 20:25:52.412256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:22.298 [2024-04-24 20:25:52.412267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.298 [2024-04-24 20:25:52.454792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.298 [2024-04-24 20:25:52.454864] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:22.298 [2024-04-24 20:25:52.454890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.563 ms 00:19:22.298 [2024-04-24 20:25:52.454901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.298 [2024-04-24 20:25:52.496231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.298 [2024-04-24 20:25:52.496298] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:22.298 [2024-04-24 20:25:52.496318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.316 ms 00:19:22.298 [2024-04-24 20:25:52.496328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.558 [2024-04-24 20:25:52.537833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.558 [2024-04-24 20:25:52.537905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:22.558 [2024-04-24 20:25:52.537926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.488 ms 00:19:22.558 [2024-04-24 20:25:52.537936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.558 [2024-04-24 20:25:52.578853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.558 [2024-04-24 20:25:52.578966] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:22.558 [2024-04-24 20:25:52.578987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.805 ms 00:19:22.558 [2024-04-24 20:25:52.578999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.558 [2024-04-24 20:25:52.579080] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:22.558 [2024-04-24 20:25:52.579102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: 
free 00:19:22.558 [2024-04-24 20:25:52.579239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:22.558 [2024-04-24 20:25:52.579314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 
261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.579991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580448] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:22.559 [2024-04-24 20:25:52.580783] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:22.559 [2024-04-24 20:25:52.580798] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c0c9aa20-1edc-4827-ab08-6b786b351665 00:19:22.559 [2024-04-24 20:25:52.580810] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:22.559 [2024-04-24 20:25:52.580824] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:22.559 [2024-04-24 20:25:52.580838] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:22.559 [2024-04-24 20:25:52.580852] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:22.560 [2024-04-24 20:25:52.580879] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:22.560 [2024-04-24 20:25:52.580894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:22.560 [2024-04-24 20:25:52.580905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:22.560 [2024-04-24 20:25:52.580917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:22.560 [2024-04-24 20:25:52.580926] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:22.560 [2024-04-24 20:25:52.580942] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.560 [2024-04-24 20:25:52.580957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:22.560 [2024-04-24 20:25:52.580974] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.868 ms 00:19:22.560 [2024-04-24 20:25:52.580984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.560 [2024-04-24 20:25:52.601847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.560 [2024-04-24 20:25:52.601924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:22.560 [2024-04-24 20:25:52.601943] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.787 ms 00:19:22.560 [2024-04-24 20:25:52.601954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.560 [2024-04-24 20:25:52.602210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.560 [2024-04-24 20:25:52.602221] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:22.560 [2024-04-24 20:25:52.602235] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:19:22.560 [2024-04-24 20:25:52.602245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.560 [2024-04-24 20:25:52.673081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.560 [2024-04-24 20:25:52.673156] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.560 [2024-04-24 20:25:52.673178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.560 [2024-04-24 20:25:52.673189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.560 [2024-04-24 20:25:52.673289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.560 [2024-04-24 20:25:52.673302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.560 [2024-04-24 20:25:52.673319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.560 [2024-04-24 20:25:52.673330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.560 [2024-04-24 20:25:52.673451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.560 [2024-04-24 20:25:52.673465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.560 [2024-04-24 20:25:52.673493] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.560 [2024-04-24 20:25:52.673503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.560 [2024-04-24 20:25:52.673525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.560 [2024-04-24 20:25:52.673539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.560 [2024-04-24 20:25:52.673552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.560 [2024-04-24 20:25:52.673577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.829 [2024-04-24 20:25:52.796415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.829 [2024-04-24 20:25:52.796480] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.829 [2024-04-24 20:25:52.796499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.829 [2024-04-24 20:25:52.796509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:22.829 [2024-04-24 20:25:52.843248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.829 [2024-04-24 20:25:52.843319] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.829 [2024-04-24 20:25:52.843341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.829 [2024-04-24 20:25:52.843351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.829 [2024-04-24 20:25:52.843455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.829 [2024-04-24 20:25:52.843467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.829 [2024-04-24 20:25:52.843481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.829 [2024-04-24 20:25:52.843492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.829 [2024-04-24 20:25:52.843544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.829 [2024-04-24 20:25:52.843555] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.829 [2024-04-24 20:25:52.843572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.829 [2024-04-24 20:25:52.843582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.829 [2024-04-24 20:25:52.843698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.829 [2024-04-24 20:25:52.843712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.829 [2024-04-24 20:25:52.843724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.829 [2024-04-24 20:25:52.843734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.829 [2024-04-24 20:25:52.843777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.829 [2024-04-24 20:25:52.843789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:22.829 [2024-04-24 20:25:52.843805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.829 [2024-04-24 20:25:52.843817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.829 [2024-04-24 20:25:52.843889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.829 [2024-04-24 20:25:52.843901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.829 [2024-04-24 20:25:52.843931] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.829 [2024-04-24 20:25:52.843941] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.829 [2024-04-24 20:25:52.843994] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.829 [2024-04-24 20:25:52.844005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.829 [2024-04-24 20:25:52.844021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.829 [2024-04-24 20:25:52.844031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.829 [2024-04-24 20:25:52.844180] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 511.385 ms, result 0 00:19:22.829 true 00:19:22.829 20:25:52 -- ftl/restore.sh@66 -- # killprocess 79236 00:19:22.829 20:25:52 -- common/autotest_common.sh@936 -- # '[' -z 79236 ']' 00:19:22.829 20:25:52 -- 
common/autotest_common.sh@940 -- # kill -0 79236 00:19:22.829 20:25:52 -- common/autotest_common.sh@941 -- # uname 00:19:22.829 20:25:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:22.829 20:25:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79236 00:19:22.829 killing process with pid 79236 00:19:22.829 20:25:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:22.829 20:25:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:22.829 20:25:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79236' 00:19:22.829 20:25:52 -- common/autotest_common.sh@955 -- # kill 79236 00:19:22.829 20:25:52 -- common/autotest_common.sh@960 -- # wait 79236 00:19:28.107 20:25:58 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:32.495 262144+0 records in 00:19:32.495 262144+0 records out 00:19:32.495 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.97925 s, 270 MB/s 00:19:32.495 20:26:02 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:33.876 20:26:04 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:34.135 [2024-04-24 20:26:04.133561] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:19:34.135 [2024-04-24 20:26:04.133678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79476 ] 00:19:34.135 [2024-04-24 20:26:04.294075] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.393 [2024-04-24 20:26:04.538225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.963 [2024-04-24 20:26:04.951576] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:34.963 [2024-04-24 20:26:04.951663] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:34.963 [2024-04-24 20:26:05.107602] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-04-24 20:26:05.107665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:34.963 [2024-04-24 20:26:05.107682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:34.963 [2024-04-24 20:26:05.107693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-04-24 20:26:05.107755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-04-24 20:26:05.107769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.963 [2024-04-24 20:26:05.107779] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:34.963 [2024-04-24 20:26:05.107789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-04-24 20:26:05.107810] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:34.963 [2024-04-24 20:26:05.108973] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:34.963 [2024-04-24 20:26:05.109006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-04-24 20:26:05.109017] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.963 [2024-04-24 20:26:05.109028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.202 ms 00:19:34.963 [2024-04-24 20:26:05.109037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-04-24 20:26:05.110486] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:34.963 [2024-04-24 20:26:05.131164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-04-24 20:26:05.131206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:34.963 [2024-04-24 20:26:05.131227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.713 ms 00:19:34.963 [2024-04-24 20:26:05.131237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-04-24 20:26:05.131299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-04-24 20:26:05.131311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:34.963 [2024-04-24 20:26:05.131322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:34.963 [2024-04-24 20:26:05.131331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-04-24 20:26:05.138192] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-04-24 20:26:05.138220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.963 [2024-04-24 20:26:05.138231] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.800 ms 00:19:34.963 [2024-04-24 20:26:05.138241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-04-24 20:26:05.138332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-04-24 20:26:05.138346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.963 [2024-04-24 20:26:05.138357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:19:34.963 [2024-04-24 20:26:05.138366] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-04-24 20:26:05.138407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-04-24 20:26:05.138421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:34.963 [2024-04-24 20:26:05.138432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:34.963 [2024-04-24 20:26:05.138441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-04-24 20:26:05.138468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:34.963 [2024-04-24 20:26:05.144225] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-04-24 20:26:05.144256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.963 [2024-04-24 20:26:05.144269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.773 ms 00:19:34.963 [2024-04-24 20:26:05.144279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-04-24 20:26:05.144307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-04-24 20:26:05.144318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:34.963 [2024-04-24 20:26:05.144328] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:34.963 [2024-04-24 20:26:05.144338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-04-24 20:26:05.144387] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:34.963 [2024-04-24 20:26:05.144415] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:34.963 [2024-04-24 20:26:05.144448] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:34.963 [2024-04-24 20:26:05.144464] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:34.963 [2024-04-24 20:26:05.144531] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:34.963 [2024-04-24 20:26:05.144544] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:34.963 [2024-04-24 20:26:05.144557] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:34.963 [2024-04-24 20:26:05.144570] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:34.963 [2024-04-24 20:26:05.144582] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:34.963 [2024-04-24 20:26:05.144596] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:34.963 [2024-04-24 20:26:05.144605] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:34.963 [2024-04-24 20:26:05.144614] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:34.963 [2024-04-24 20:26:05.144624] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:34.963 [2024-04-24 20:26:05.144634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-04-24 20:26:05.144644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:34.963 [2024-04-24 20:26:05.144654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:19:34.963 [2024-04-24 20:26:05.144663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-04-24 20:26:05.144718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.963 [2024-04-24 20:26:05.144729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:34.963 [2024-04-24 20:26:05.144741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:34.963 [2024-04-24 20:26:05.144751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.963 [2024-04-24 20:26:05.144815] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:34.963 [2024-04-24 20:26:05.144827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:34.963 [2024-04-24 20:26:05.144837] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.963 [2024-04-24 20:26:05.144847] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.963 [2024-04-24 20:26:05.144873] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:34.963 [2024-04-24 20:26:05.144883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 
MiB 00:19:34.963 [2024-04-24 20:26:05.144892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:34.963 [2024-04-24 20:26:05.144902] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:34.963 [2024-04-24 20:26:05.144912] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:34.963 [2024-04-24 20:26:05.144921] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.963 [2024-04-24 20:26:05.144931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:34.963 [2024-04-24 20:26:05.144941] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:34.963 [2024-04-24 20:26:05.144981] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.963 [2024-04-24 20:26:05.144991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:34.963 [2024-04-24 20:26:05.145001] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:34.963 [2024-04-24 20:26:05.145010] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.963 [2024-04-24 20:26:05.145019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:34.963 [2024-04-24 20:26:05.145028] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:34.963 [2024-04-24 20:26:05.145037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.963 [2024-04-24 20:26:05.145046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:34.963 [2024-04-24 20:26:05.145055] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:34.963 [2024-04-24 20:26:05.145064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:34.963 [2024-04-24 20:26:05.145073] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:34.964 [2024-04-24 20:26:05.145082] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:34.964 [2024-04-24 20:26:05.145090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:34.964 [2024-04-24 20:26:05.145099] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:34.964 [2024-04-24 20:26:05.145108] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:34.964 [2024-04-24 20:26:05.145117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:34.964 [2024-04-24 20:26:05.145133] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:34.964 [2024-04-24 20:26:05.145142] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:34.964 [2024-04-24 20:26:05.145150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:34.964 [2024-04-24 20:26:05.145159] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:34.964 [2024-04-24 20:26:05.145168] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:34.964 [2024-04-24 20:26:05.145177] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:34.964 [2024-04-24 20:26:05.145185] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:34.964 [2024-04-24 20:26:05.145194] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:34.964 [2024-04-24 20:26:05.145203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.964 [2024-04-24 20:26:05.145212] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:19:34.964 [2024-04-24 20:26:05.145221] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:34.964 [2024-04-24 20:26:05.145230] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.964 [2024-04-24 20:26:05.145238] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:34.964 [2024-04-24 20:26:05.145248] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:34.964 [2024-04-24 20:26:05.145261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.964 [2024-04-24 20:26:05.145277] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.964 [2024-04-24 20:26:05.145287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:34.964 [2024-04-24 20:26:05.145296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:34.964 [2024-04-24 20:26:05.145305] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:34.964 [2024-04-24 20:26:05.145314] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:34.964 [2024-04-24 20:26:05.145322] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:34.964 [2024-04-24 20:26:05.145332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:34.964 [2024-04-24 20:26:05.145342] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:34.964 [2024-04-24 20:26:05.145354] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.964 [2024-04-24 20:26:05.145365] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:34.964 [2024-04-24 20:26:05.145375] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:34.964 [2024-04-24 20:26:05.145385] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:34.964 [2024-04-24 20:26:05.145395] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:34.964 [2024-04-24 20:26:05.145404] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:34.964 [2024-04-24 20:26:05.145415] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:34.964 [2024-04-24 20:26:05.145425] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:34.964 [2024-04-24 20:26:05.145435] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:34.964 [2024-04-24 20:26:05.145445] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:34.964 [2024-04-24 20:26:05.145455] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:34.964 [2024-04-24 20:26:05.145465] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:34.964 [2024-04-24 20:26:05.145475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:34.964 [2024-04-24 20:26:05.145486] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:34.964 [2024-04-24 20:26:05.145496] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:34.964 [2024-04-24 20:26:05.145507] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.964 [2024-04-24 20:26:05.145517] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:34.964 [2024-04-24 20:26:05.145527] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:34.964 [2024-04-24 20:26:05.145537] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:34.964 [2024-04-24 20:26:05.145547] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:34.964 [2024-04-24 20:26:05.145558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.964 [2024-04-24 20:26:05.145568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:34.964 [2024-04-24 20:26:05.145578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:19:34.964 [2024-04-24 20:26:05.145587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.964 [2024-04-24 20:26:05.170734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.964 [2024-04-24 20:26:05.170772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.964 [2024-04-24 20:26:05.170793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.143 ms 00:19:34.964 [2024-04-24 20:26:05.170803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.964 [2024-04-24 20:26:05.170898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.964 [2024-04-24 20:26:05.170914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:34.964 [2024-04-24 20:26:05.170925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:34.964 [2024-04-24 20:26:05.170935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.224 [2024-04-24 20:26:05.235326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.224 [2024-04-24 20:26:05.235370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:35.224 [2024-04-24 20:26:05.235396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.437 ms 00:19:35.224 [2024-04-24 20:26:05.235410] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.224 [2024-04-24 20:26:05.235466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.224 [2024-04-24 20:26:05.235478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:35.224 [2024-04-24 20:26:05.235489] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:35.224 [2024-04-24 20:26:05.235499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.224 [2024-04-24 20:26:05.236018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.224 [2024-04-24 20:26:05.236037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:35.224 [2024-04-24 20:26:05.236048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:19:35.224 [2024-04-24 20:26:05.236059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.224 [2024-04-24 20:26:05.236183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.224 [2024-04-24 20:26:05.236197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:35.224 [2024-04-24 20:26:05.236208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:35.224 [2024-04-24 20:26:05.236218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.224 [2024-04-24 20:26:05.260336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.224 [2024-04-24 20:26:05.260380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:35.224 [2024-04-24 20:26:05.260395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.134 ms 00:19:35.224 [2024-04-24 20:26:05.260414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.224 [2024-04-24 20:26:05.280788] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:35.224 [2024-04-24 20:26:05.280839] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:35.224 [2024-04-24 20:26:05.280868] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.224 [2024-04-24 20:26:05.280882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:35.224 [2024-04-24 20:26:05.280900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.353 ms 00:19:35.224 [2024-04-24 20:26:05.280916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.224 [2024-04-24 20:26:05.311438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.224 [2024-04-24 20:26:05.311510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:35.224 [2024-04-24 20:26:05.311533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.514 ms 00:19:35.224 [2024-04-24 20:26:05.311548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.224 [2024-04-24 20:26:05.330758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.224 [2024-04-24 20:26:05.330833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:35.224 [2024-04-24 20:26:05.330885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.164 ms 00:19:35.224 [2024-04-24 20:26:05.330902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.224 [2024-04-24 20:26:05.348278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.224 [2024-04-24 20:26:05.348340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:35.224 [2024-04-24 20:26:05.348359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.324 ms 
00:19:35.224 [2024-04-24 20:26:05.348372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.224 [2024-04-24 20:26:05.348882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.224 [2024-04-24 20:26:05.348915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:35.224 [2024-04-24 20:26:05.348932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:19:35.224 [2024-04-24 20:26:05.348946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.224 [2024-04-24 20:26:05.438861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.224 [2024-04-24 20:26:05.438942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:35.224 [2024-04-24 20:26:05.438960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.023 ms 00:19:35.224 [2024-04-24 20:26:05.438970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.224 [2024-04-24 20:26:05.454546] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:35.483 [2024-04-24 20:26:05.457946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.483 [2024-04-24 20:26:05.457985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:35.483 [2024-04-24 20:26:05.457999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.928 ms 00:19:35.483 [2024-04-24 20:26:05.458010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.483 [2024-04-24 20:26:05.458125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.483 [2024-04-24 20:26:05.458138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:35.483 [2024-04-24 20:26:05.458157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:35.483 [2024-04-24 20:26:05.458167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.483 [2024-04-24 20:26:05.458237] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.483 [2024-04-24 20:26:05.458248] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:35.483 [2024-04-24 20:26:05.458259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:35.483 [2024-04-24 20:26:05.458268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.483 [2024-04-24 20:26:05.460416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.483 [2024-04-24 20:26:05.460447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:35.483 [2024-04-24 20:26:05.460458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.133 ms 00:19:35.483 [2024-04-24 20:26:05.460472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.483 [2024-04-24 20:26:05.460502] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.483 [2024-04-24 20:26:05.460512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:35.483 [2024-04-24 20:26:05.460522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:35.483 [2024-04-24 20:26:05.460532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.483 [2024-04-24 20:26:05.460566] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:35.483 
[2024-04-24 20:26:05.460579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.483 [2024-04-24 20:26:05.460589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:35.483 [2024-04-24 20:26:05.460599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:35.483 [2024-04-24 20:26:05.460608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.483 [2024-04-24 20:26:05.499976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.483 [2024-04-24 20:26:05.500037] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:35.483 [2024-04-24 20:26:05.500052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.408 ms 00:19:35.483 [2024-04-24 20:26:05.500063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.483 [2024-04-24 20:26:05.500152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:35.483 [2024-04-24 20:26:05.500164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:35.483 [2024-04-24 20:26:05.500176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:35.483 [2024-04-24 20:26:05.500192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:35.483 [2024-04-24 20:26:05.501294] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 393.877 ms, result 0 00:20:08.235  Copying: 31/1024 [MB] (31 MBps) Copying: 59/1024 [MB] (27 MBps) Copying: 89/1024 [MB] (29 MBps) Copying: 119/1024 [MB] (30 MBps) Copying: 150/1024 [MB] (31 MBps) Copying: 181/1024 [MB] (30 MBps) Copying: 211/1024 [MB] (30 MBps) Copying: 241/1024 [MB] (30 MBps) Copying: 272/1024 [MB] (30 MBps) Copying: 304/1024 [MB] (32 MBps) Copying: 334/1024 [MB] (30 MBps) Copying: 365/1024 [MB] (30 MBps) Copying: 396/1024 [MB] (30 MBps) Copying: 428/1024 [MB] (31 MBps) Copying: 459/1024 [MB] (31 MBps) Copying: 491/1024 [MB] (31 MBps) Copying: 522/1024 [MB] (31 MBps) Copying: 553/1024 [MB] (31 MBps) Copying: 584/1024 [MB] (30 MBps) Copying: 615/1024 [MB] (31 MBps) Copying: 646/1024 [MB] (31 MBps) Copying: 677/1024 [MB] (31 MBps) Copying: 709/1024 [MB] (31 MBps) Copying: 740/1024 [MB] (31 MBps) Copying: 771/1024 [MB] (30 MBps) Copying: 803/1024 [MB] (31 MBps) Copying: 834/1024 [MB] (31 MBps) Copying: 866/1024 [MB] (31 MBps) Copying: 898/1024 [MB] (31 MBps) Copying: 928/1024 [MB] (30 MBps) Copying: 960/1024 [MB] (32 MBps) Copying: 991/1024 [MB] (30 MBps) Copying: 1024/1024 [MB] (average 31 MBps)[2024-04-24 20:26:38.443523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.235 [2024-04-24 20:26:38.443581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:08.235 [2024-04-24 20:26:38.443604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:08.235 [2024-04-24 20:26:38.443615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.235 [2024-04-24 20:26:38.443638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:08.235 [2024-04-24 20:26:38.447344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.235 [2024-04-24 20:26:38.447380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:08.235 [2024-04-24 20:26:38.447394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
3.694 ms 00:20:08.235 [2024-04-24 20:26:38.447405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.235 [2024-04-24 20:26:38.449058] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.235 [2024-04-24 20:26:38.449106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:08.235 [2024-04-24 20:26:38.449126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:20:08.235 [2024-04-24 20:26:38.449136] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.235 [2024-04-24 20:26:38.467368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.235 [2024-04-24 20:26:38.467408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:08.235 [2024-04-24 20:26:38.467422] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.243 ms 00:20:08.236 [2024-04-24 20:26:38.467433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.497 [2024-04-24 20:26:38.472717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.497 [2024-04-24 20:26:38.472749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:08.497 [2024-04-24 20:26:38.472761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.259 ms 00:20:08.497 [2024-04-24 20:26:38.472781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.497 [2024-04-24 20:26:38.512908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.497 [2024-04-24 20:26:38.512956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:08.497 [2024-04-24 20:26:38.512971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.142 ms 00:20:08.497 [2024-04-24 20:26:38.512980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.497 [2024-04-24 20:26:38.535723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.497 [2024-04-24 20:26:38.535773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:08.497 [2024-04-24 20:26:38.535789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.737 ms 00:20:08.497 [2024-04-24 20:26:38.535800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.497 [2024-04-24 20:26:38.535962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.497 [2024-04-24 20:26:38.535977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:08.497 [2024-04-24 20:26:38.536007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:20:08.497 [2024-04-24 20:26:38.536017] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.497 [2024-04-24 20:26:38.574845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.497 [2024-04-24 20:26:38.574894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:08.497 [2024-04-24 20:26:38.574908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.873 ms 00:20:08.497 [2024-04-24 20:26:38.574918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.497 [2024-04-24 20:26:38.613277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.497 [2024-04-24 20:26:38.613322] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:08.497 [2024-04-24 20:26:38.613337] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.382 ms 00:20:08.497 [2024-04-24 20:26:38.613347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.497 [2024-04-24 20:26:38.651078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.497 [2024-04-24 20:26:38.651135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:08.497 [2024-04-24 20:26:38.651149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.753 ms 00:20:08.497 [2024-04-24 20:26:38.651159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.497 [2024-04-24 20:26:38.689284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.497 [2024-04-24 20:26:38.689327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:08.497 [2024-04-24 20:26:38.689342] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.108 ms 00:20:08.497 [2024-04-24 20:26:38.689351] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.497 [2024-04-24 20:26:38.689389] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:08.497 [2024-04-24 20:26:38.689406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689580] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:08.497 [2024-04-24 20:26:38.689694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 
20:26:38.689845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.689995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:08.498 [2024-04-24 20:26:38.690121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:08.498 [2024-04-24 20:26:38.690490] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:08.498 [2024-04-24 20:26:38.690500] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c0c9aa20-1edc-4827-ab08-6b786b351665 00:20:08.498 [2024-04-24 20:26:38.690510] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:08.498 [2024-04-24 20:26:38.690520] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:08.498 [2024-04-24 20:26:38.690529] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:08.498 [2024-04-24 20:26:38.690545] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:08.498 [2024-04-24 20:26:38.690567] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:08.498 [2024-04-24 20:26:38.690578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:08.498 [2024-04-24 20:26:38.690588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:08.498 [2024-04-24 20:26:38.690597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:08.498 [2024-04-24 20:26:38.690605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:08.498 [2024-04-24 20:26:38.690615] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.498 [2024-04-24 20:26:38.690624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:08.498 [2024-04-24 20:26:38.690634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.229 ms 00:20:08.498 [2024-04-24 20:26:38.690643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.498 [2024-04-24 20:26:38.710955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:08.498 [2024-04-24 20:26:38.711000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:08.498 [2024-04-24 20:26:38.711020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.312 ms 00:20:08.498 [2024-04-24 20:26:38.711030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.498 [2024-04-24 20:26:38.711283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:08.498 [2024-04-24 20:26:38.711294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:08.498 [2024-04-24 20:26:38.711304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:20:08.499 [2024-04-24 20:26:38.711313] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.758 [2024-04-24 20:26:38.766699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.758 [2024-04-24 20:26:38.766761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:08.758 [2024-04-24 20:26:38.766775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.758 [2024-04-24 20:26:38.766785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.758 [2024-04-24 20:26:38.766890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.758 [2024-04-24 20:26:38.766903] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:08.758 [2024-04-24 20:26:38.766913] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.758 [2024-04-24 20:26:38.766924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.758 [2024-04-24 20:26:38.766998] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.758 [2024-04-24 20:26:38.767011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:08.758 [2024-04-24 20:26:38.767026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.758 [2024-04-24 20:26:38.767036] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.758 [2024-04-24 20:26:38.767054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.758 [2024-04-24 20:26:38.767064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:08.758 [2024-04-24 20:26:38.767074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.758 [2024-04-24 20:26:38.767085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.758 [2024-04-24 20:26:38.886693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.758 [2024-04-24 20:26:38.886758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:08.758 [2024-04-24 20:26:38.886774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.758 [2024-04-24 20:26:38.886784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.758 [2024-04-24 20:26:38.934485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.758 [2024-04-24 20:26:38.934548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:08.758 [2024-04-24 20:26:38.934563] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.758 [2024-04-24 20:26:38.934573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.758 [2024-04-24 20:26:38.934634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.758 [2024-04-24 20:26:38.934644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:08.758 [2024-04-24 20:26:38.934654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.758 [2024-04-24 20:26:38.934673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:08.758 [2024-04-24 20:26:38.934708] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.758 [2024-04-24 20:26:38.934718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:08.758 [2024-04-24 20:26:38.934728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.758 [2024-04-24 20:26:38.934738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.758 [2024-04-24 20:26:38.934852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.758 [2024-04-24 20:26:38.934884] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:08.758 [2024-04-24 20:26:38.934894] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.758 [2024-04-24 20:26:38.934904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.758 [2024-04-24 20:26:38.934943] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.758 [2024-04-24 20:26:38.934955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:08.758 [2024-04-24 20:26:38.934965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.758 [2024-04-24 20:26:38.934974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.758 [2024-04-24 20:26:38.935009] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.758 [2024-04-24 20:26:38.935020] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:08.758 [2024-04-24 20:26:38.935030] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.758 [2024-04-24 20:26:38.935039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.758 [2024-04-24 20:26:38.935084] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:08.758 [2024-04-24 20:26:38.935096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:08.758 [2024-04-24 20:26:38.935106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:08.758 [2024-04-24 20:26:38.935115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:08.758 [2024-04-24 20:26:38.935244] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 492.488 ms, result 0 00:20:10.665 00:20:10.665 00:20:10.665 20:26:40 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:20:10.665 [2024-04-24 20:26:40.826913] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:20:10.665 [2024-04-24 20:26:40.827032] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79850 ] 00:20:10.923 [2024-04-24 20:26:40.996323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.182 [2024-04-24 20:26:41.231738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.442 [2024-04-24 20:26:41.635220] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:11.442 [2024-04-24 20:26:41.635295] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:11.702 [2024-04-24 20:26:41.789544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.702 [2024-04-24 20:26:41.789598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:11.702 [2024-04-24 20:26:41.789624] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:11.702 [2024-04-24 20:26:41.789639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.702 [2024-04-24 20:26:41.789728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.702 [2024-04-24 20:26:41.789750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:11.702 [2024-04-24 20:26:41.789766] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:11.702 [2024-04-24 20:26:41.789780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.702 [2024-04-24 20:26:41.789813] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:11.702 [2024-04-24 20:26:41.790922] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:11.702 [2024-04-24 20:26:41.790966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.702 [2024-04-24 20:26:41.790985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:11.702 [2024-04-24 20:26:41.791001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.162 ms 00:20:11.702 [2024-04-24 20:26:41.791016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.702 [2024-04-24 20:26:41.792609] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:11.702 [2024-04-24 20:26:41.812770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.702 [2024-04-24 20:26:41.812810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:11.702 [2024-04-24 20:26:41.812840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.194 ms 00:20:11.702 [2024-04-24 20:26:41.812871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.702 [2024-04-24 20:26:41.812950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.702 [2024-04-24 20:26:41.812970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:11.702 [2024-04-24 20:26:41.812988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:11.702 [2024-04-24 20:26:41.813004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.702 [2024-04-24 20:26:41.820037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.702 [2024-04-24 
20:26:41.820070] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:11.702 [2024-04-24 20:26:41.820090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.944 ms 00:20:11.702 [2024-04-24 20:26:41.820104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.702 [2024-04-24 20:26:41.820229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.702 [2024-04-24 20:26:41.820251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:11.702 [2024-04-24 20:26:41.820268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:11.702 [2024-04-24 20:26:41.820284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.702 [2024-04-24 20:26:41.820346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.702 [2024-04-24 20:26:41.820369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:11.702 [2024-04-24 20:26:41.820387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:11.703 [2024-04-24 20:26:41.820403] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.703 [2024-04-24 20:26:41.820442] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:11.703 [2024-04-24 20:26:41.826346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.703 [2024-04-24 20:26:41.826382] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:11.703 [2024-04-24 20:26:41.826403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.922 ms 00:20:11.703 [2024-04-24 20:26:41.826418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.703 [2024-04-24 20:26:41.826468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.703 [2024-04-24 20:26:41.826484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:11.703 [2024-04-24 20:26:41.826501] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:11.703 [2024-04-24 20:26:41.826516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.703 [2024-04-24 20:26:41.826584] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:11.703 [2024-04-24 20:26:41.826624] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:11.703 [2024-04-24 20:26:41.826672] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:11.703 [2024-04-24 20:26:41.826700] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:11.703 [2024-04-24 20:26:41.826782] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:11.703 [2024-04-24 20:26:41.826815] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:11.703 [2024-04-24 20:26:41.826835] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:11.703 [2024-04-24 20:26:41.826874] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:11.703 [2024-04-24 20:26:41.826897] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:11.703 [2024-04-24 20:26:41.826919] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:11.703 [2024-04-24 20:26:41.826934] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:11.703 [2024-04-24 20:26:41.826950] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:11.703 [2024-04-24 20:26:41.826965] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:11.703 [2024-04-24 20:26:41.826983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.703 [2024-04-24 20:26:41.827016] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:11.703 [2024-04-24 20:26:41.827033] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:20:11.703 [2024-04-24 20:26:41.827050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.703 [2024-04-24 20:26:41.827139] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.703 [2024-04-24 20:26:41.827160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:11.703 [2024-04-24 20:26:41.827182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:11.703 [2024-04-24 20:26:41.827198] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.703 [2024-04-24 20:26:41.827289] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:11.703 [2024-04-24 20:26:41.827312] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:11.703 [2024-04-24 20:26:41.827329] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.703 [2024-04-24 20:26:41.827347] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.703 [2024-04-24 20:26:41.827363] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:11.703 [2024-04-24 20:26:41.827378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:11.703 [2024-04-24 20:26:41.827396] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:11.703 [2024-04-24 20:26:41.827410] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:11.703 [2024-04-24 20:26:41.827426] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:11.703 [2024-04-24 20:26:41.827444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.703 [2024-04-24 20:26:41.827461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:11.703 [2024-04-24 20:26:41.827478] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:11.703 [2024-04-24 20:26:41.827507] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:11.703 [2024-04-24 20:26:41.827523] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:11.703 [2024-04-24 20:26:41.827539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:11.703 [2024-04-24 20:26:41.827553] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.703 [2024-04-24 20:26:41.827567] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:11.703 [2024-04-24 20:26:41.827582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:11.703 [2024-04-24 20:26:41.827597] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:11.703 [2024-04-24 20:26:41.827614] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:11.703 [2024-04-24 20:26:41.827630] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:11.703 [2024-04-24 20:26:41.827647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:11.703 [2024-04-24 20:26:41.827663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:11.703 [2024-04-24 20:26:41.827679] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:11.703 [2024-04-24 20:26:41.827694] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:11.703 [2024-04-24 20:26:41.827709] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:11.703 [2024-04-24 20:26:41.827725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:11.703 [2024-04-24 20:26:41.827740] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:11.703 [2024-04-24 20:26:41.827755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:11.703 [2024-04-24 20:26:41.827771] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:11.703 [2024-04-24 20:26:41.827787] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:11.703 [2024-04-24 20:26:41.827802] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:11.703 [2024-04-24 20:26:41.827817] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:11.703 [2024-04-24 20:26:41.827833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:11.703 [2024-04-24 20:26:41.827849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:11.703 [2024-04-24 20:26:41.827863] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:11.703 [2024-04-24 20:26:41.827894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.703 [2024-04-24 20:26:41.827911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:11.703 [2024-04-24 20:26:41.827928] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:11.703 [2024-04-24 20:26:41.827942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:11.703 [2024-04-24 20:26:41.827958] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:11.703 [2024-04-24 20:26:41.827973] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:11.703 [2024-04-24 20:26:41.827996] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:11.703 [2024-04-24 20:26:41.828020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:11.703 [2024-04-24 20:26:41.828037] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:11.703 [2024-04-24 20:26:41.828054] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:11.703 [2024-04-24 20:26:41.828069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:11.703 [2024-04-24 20:26:41.828097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:11.703 [2024-04-24 20:26:41.828111] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:11.703 [2024-04-24 20:26:41.828126] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:11.703 [2024-04-24 20:26:41.828143] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:11.703 [2024-04-24 20:26:41.828163] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.703 [2024-04-24 20:26:41.828182] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:11.703 [2024-04-24 20:26:41.828199] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:11.703 [2024-04-24 20:26:41.828215] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:11.703 [2024-04-24 20:26:41.828234] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:11.703 [2024-04-24 20:26:41.828249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:11.703 [2024-04-24 20:26:41.828266] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:11.703 [2024-04-24 20:26:41.828283] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:11.703 [2024-04-24 20:26:41.828300] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:11.703 [2024-04-24 20:26:41.828315] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:11.703 [2024-04-24 20:26:41.828332] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:11.703 [2024-04-24 20:26:41.828348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:11.703 [2024-04-24 20:26:41.828364] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:11.703 [2024-04-24 20:26:41.828381] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:11.703 [2024-04-24 20:26:41.828395] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:11.703 [2024-04-24 20:26:41.828413] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:11.703 [2024-04-24 20:26:41.828430] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:11.703 [2024-04-24 20:26:41.828447] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:11.704 [2024-04-24 20:26:41.828464] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:11.704 [2024-04-24 20:26:41.828482] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:20:11.704 [2024-04-24 20:26:41.828500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.704 [2024-04-24 20:26:41.828516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:11.704 [2024-04-24 20:26:41.828532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.254 ms 00:20:11.704 [2024-04-24 20:26:41.828547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.704 [2024-04-24 20:26:41.854809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.704 [2024-04-24 20:26:41.854873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:11.704 [2024-04-24 20:26:41.854896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.236 ms 00:20:11.704 [2024-04-24 20:26:41.854911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.704 [2024-04-24 20:26:41.855022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.704 [2024-04-24 20:26:41.855048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:11.704 [2024-04-24 20:26:41.855066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:11.704 [2024-04-24 20:26:41.855081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.704 [2024-04-24 20:26:41.918894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.704 [2024-04-24 20:26:41.918945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:11.704 [2024-04-24 20:26:41.918969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.821 ms 00:20:11.704 [2024-04-24 20:26:41.918990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.704 [2024-04-24 20:26:41.919069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.704 [2024-04-24 20:26:41.919087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:11.704 [2024-04-24 20:26:41.919101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:11.704 [2024-04-24 20:26:41.919115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.704 [2024-04-24 20:26:41.919782] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.704 [2024-04-24 20:26:41.919818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:11.704 [2024-04-24 20:26:41.919840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:20:11.704 [2024-04-24 20:26:41.919876] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.704 [2024-04-24 20:26:41.920102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.704 [2024-04-24 20:26:41.920131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:11.704 [2024-04-24 20:26:41.920149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:20:11.704 [2024-04-24 20:26:41.920166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.963 [2024-04-24 20:26:41.943091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.963 [2024-04-24 20:26:41.943142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:11.963 [2024-04-24 20:26:41.943165] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.922 ms 00:20:11.964 [2024-04-24 
20:26:41.943181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:41.963016] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:11.964 [2024-04-24 20:26:41.963061] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:11.964 [2024-04-24 20:26:41.963085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:41.963101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:11.964 [2024-04-24 20:26:41.963117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.752 ms 00:20:11.964 [2024-04-24 20:26:41.963131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:41.993437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:41.993484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:11.964 [2024-04-24 20:26:41.993506] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.298 ms 00:20:11.964 [2024-04-24 20:26:41.993523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.013308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:42.013355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:11.964 [2024-04-24 20:26:42.013376] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.756 ms 00:20:11.964 [2024-04-24 20:26:42.013404] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.031707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:42.031748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:11.964 [2024-04-24 20:26:42.031769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.269 ms 00:20:11.964 [2024-04-24 20:26:42.031785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.032295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:42.032331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:11.964 [2024-04-24 20:26:42.032351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:20:11.964 [2024-04-24 20:26:42.032367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.124616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:42.124682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:11.964 [2024-04-24 20:26:42.124709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.359 ms 00:20:11.964 [2024-04-24 20:26:42.124724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.137460] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:11.964 [2024-04-24 20:26:42.140728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:42.140766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:11.964 [2024-04-24 20:26:42.140790] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.945 ms 00:20:11.964 [2024-04-24 20:26:42.140806] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.140965] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:42.140988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:11.964 [2024-04-24 20:26:42.141007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:11.964 [2024-04-24 20:26:42.141022] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.141126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:42.141146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:11.964 [2024-04-24 20:26:42.141163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:11.964 [2024-04-24 20:26:42.141177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.143283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:42.143320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:11.964 [2024-04-24 20:26:42.143341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.075 ms 00:20:11.964 [2024-04-24 20:26:42.143355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.143404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:42.143433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:11.964 [2024-04-24 20:26:42.143449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:11.964 [2024-04-24 20:26:42.143465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.143516] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:11.964 [2024-04-24 20:26:42.143535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:42.143550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:11.964 [2024-04-24 20:26:42.143571] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:11.964 [2024-04-24 20:26:42.143586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.181270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:42.181311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:11.964 [2024-04-24 20:26:42.181326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.711 ms 00:20:11.964 [2024-04-24 20:26:42.181336] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.181417] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:11.964 [2024-04-24 20:26:42.181430] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:11.964 [2024-04-24 20:26:42.181441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:11.964 [2024-04-24 20:26:42.181451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:11.964 [2024-04-24 20:26:42.182589] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 393.183 ms, result 0 00:20:45.267  Copying: 1024/1024 [MB] (average 31 MBps)[2024-04-24 20:27:15.350458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.267 [2024-04-24 20:27:15.351668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:45.267 [2024-04-24 20:27:15.351873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:45.267 [2024-04-24 20:27:15.351937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.267 [2024-04-24 20:27:15.352084] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:45.267 [2024-04-24 20:27:15.359157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.267 [2024-04-24 20:27:15.359351] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:45.267 [2024-04-24 20:27:15.359468] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.394 ms 00:20:45.267 [2024-04-24 20:27:15.359520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.267 [2024-04-24 20:27:15.359885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.267 [2024-04-24 20:27:15.359959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:45.267 [2024-04-24 20:27:15.360007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:20:45.267 [2024-04-24 20:27:15.360137] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.267 [2024-04-24 20:27:15.363371] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.267 [2024-04-24 20:27:15.363490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:45.267 [2024-04-24 20:27:15.363594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.187 ms 00:20:45.267 [2024-04-24 20:27:15.363634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.267 [2024-04-24 20:27:15.369885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.267 [2024-04-24 20:27:15.370030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:45.267 [2024-04-24 20:27:15.370109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.212 ms 00:20:45.267 [2024-04-24 20:27:15.370145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:20:45.267 [2024-04-24 20:27:15.408811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.267 [2024-04-24 20:27:15.408957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:45.267 [2024-04-24 20:27:15.409027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.650 ms 00:20:45.267 [2024-04-24 20:27:15.409062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.267 [2024-04-24 20:27:15.430964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.267 [2024-04-24 20:27:15.431111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:45.267 [2024-04-24 20:27:15.431206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.881 ms 00:20:45.267 [2024-04-24 20:27:15.431242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.267 [2024-04-24 20:27:15.431428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.267 [2024-04-24 20:27:15.431577] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:45.267 [2024-04-24 20:27:15.431686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:45.267 [2024-04-24 20:27:15.431722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.267 [2024-04-24 20:27:15.469403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.267 [2024-04-24 20:27:15.469531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:45.267 [2024-04-24 20:27:15.469598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.705 ms 00:20:45.267 [2024-04-24 20:27:15.469631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.528 [2024-04-24 20:27:15.506648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.528 [2024-04-24 20:27:15.506774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:45.528 [2024-04-24 20:27:15.506848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.022 ms 00:20:45.528 [2024-04-24 20:27:15.506899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.528 [2024-04-24 20:27:15.543739] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.528 [2024-04-24 20:27:15.543878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:45.528 [2024-04-24 20:27:15.543951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.811 ms 00:20:45.528 [2024-04-24 20:27:15.543984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.528 [2024-04-24 20:27:15.581645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.528 [2024-04-24 20:27:15.581839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:45.528 [2024-04-24 20:27:15.581875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.566 ms 00:20:45.528 [2024-04-24 20:27:15.581886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.528 [2024-04-24 20:27:15.581925] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:45.528 [2024-04-24 20:27:15.581942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.581954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.581966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.581977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.581988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.581999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582216] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 
20:27:15.582473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:45.528 [2024-04-24 20:27:15.582678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:20:45.529 [2024-04-24 20:27:15.582729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.582999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.583010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:45.529 [2024-04-24 20:27:15.583029] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:45.529 
[2024-04-24 20:27:15.583040] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c0c9aa20-1edc-4827-ab08-6b786b351665 00:20:45.529 [2024-04-24 20:27:15.583051] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:45.529 [2024-04-24 20:27:15.583069] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:45.529 [2024-04-24 20:27:15.583079] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:45.529 [2024-04-24 20:27:15.583101] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:45.529 [2024-04-24 20:27:15.583111] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:45.529 [2024-04-24 20:27:15.583121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:45.529 [2024-04-24 20:27:15.583132] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:45.529 [2024-04-24 20:27:15.583141] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:45.529 [2024-04-24 20:27:15.583150] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:45.529 [2024-04-24 20:27:15.583160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.529 [2024-04-24 20:27:15.583172] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:45.529 [2024-04-24 20:27:15.583183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.239 ms 00:20:45.529 [2024-04-24 20:27:15.583193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.529 [2024-04-24 20:27:15.603030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.529 [2024-04-24 20:27:15.603066] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:45.529 [2024-04-24 20:27:15.603079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.832 ms 00:20:45.529 [2024-04-24 20:27:15.603089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.529 [2024-04-24 20:27:15.603327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:45.529 [2024-04-24 20:27:15.603338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:45.529 [2024-04-24 20:27:15.603348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:20:45.529 [2024-04-24 20:27:15.603358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.529 [2024-04-24 20:27:15.658918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.529 [2024-04-24 20:27:15.658973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:45.529 [2024-04-24 20:27:15.658989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.529 [2024-04-24 20:27:15.658999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.529 [2024-04-24 20:27:15.659072] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.529 [2024-04-24 20:27:15.659083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:45.529 [2024-04-24 20:27:15.659094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.529 [2024-04-24 20:27:15.659104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.529 [2024-04-24 20:27:15.659180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.529 [2024-04-24 
20:27:15.659192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:45.529 [2024-04-24 20:27:15.659203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.529 [2024-04-24 20:27:15.659213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.529 [2024-04-24 20:27:15.659229] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.529 [2024-04-24 20:27:15.659239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:45.529 [2024-04-24 20:27:15.659250] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.529 [2024-04-24 20:27:15.659259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.789 [2024-04-24 20:27:15.777697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.789 [2024-04-24 20:27:15.777755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:45.789 [2024-04-24 20:27:15.777771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.789 [2024-04-24 20:27:15.777781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.789 [2024-04-24 20:27:15.824924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.789 [2024-04-24 20:27:15.824974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:45.789 [2024-04-24 20:27:15.824989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.789 [2024-04-24 20:27:15.824999] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.789 [2024-04-24 20:27:15.825060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.789 [2024-04-24 20:27:15.825078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:45.789 [2024-04-24 20:27:15.825088] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.789 [2024-04-24 20:27:15.825098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.789 [2024-04-24 20:27:15.825133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.789 [2024-04-24 20:27:15.825144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:45.789 [2024-04-24 20:27:15.825153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.789 [2024-04-24 20:27:15.825163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.789 [2024-04-24 20:27:15.825270] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.789 [2024-04-24 20:27:15.825282] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:45.789 [2024-04-24 20:27:15.825297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.789 [2024-04-24 20:27:15.825307] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.789 [2024-04-24 20:27:15.825345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.789 [2024-04-24 20:27:15.825357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:45.789 [2024-04-24 20:27:15.825367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.789 [2024-04-24 20:27:15.825377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.789 [2024-04-24 20:27:15.825414] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.789 [2024-04-24 20:27:15.825424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:45.789 [2024-04-24 20:27:15.825438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.789 [2024-04-24 20:27:15.825448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.789 [2024-04-24 20:27:15.825491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:45.789 [2024-04-24 20:27:15.825502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:45.789 [2024-04-24 20:27:15.825512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:45.789 [2024-04-24 20:27:15.825522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:45.789 [2024-04-24 20:27:15.825637] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 475.944 ms, result 0 00:20:47.169 00:20:47.169 00:20:47.169 20:27:17 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:49.077 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:49.077 20:27:18 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:49.077 [2024-04-24 20:27:18.901625] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:20:49.077 [2024-04-24 20:27:18.901740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80241 ] 00:20:49.077 [2024-04-24 20:27:19.067941] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.077 [2024-04-24 20:27:19.300772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.649 [2024-04-24 20:27:19.706573] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:49.649 [2024-04-24 20:27:19.706632] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:49.649 [2024-04-24 20:27:19.860526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.649 [2024-04-24 20:27:19.860581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:49.649 [2024-04-24 20:27:19.860596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:49.649 [2024-04-24 20:27:19.860607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.649 [2024-04-24 20:27:19.860665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.649 [2024-04-24 20:27:19.860679] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:49.649 [2024-04-24 20:27:19.860690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:49.649 [2024-04-24 20:27:19.860699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.649 [2024-04-24 20:27:19.860720] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:49.649 [2024-04-24 20:27:19.861898] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:49.649 [2024-04-24 20:27:19.861931] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.649 [2024-04-24 20:27:19.861943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:49.649 [2024-04-24 20:27:19.861953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:20:49.649 [2024-04-24 20:27:19.861964] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.649 [2024-04-24 20:27:19.863374] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:49.907 [2024-04-24 20:27:19.884134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.907 [2024-04-24 20:27:19.884178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:49.907 [2024-04-24 20:27:19.884201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.793 ms 00:20:49.907 [2024-04-24 20:27:19.884210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.907 [2024-04-24 20:27:19.884275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.907 [2024-04-24 20:27:19.884288] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:49.907 [2024-04-24 20:27:19.884299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:49.907 [2024-04-24 20:27:19.884308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.907 [2024-04-24 20:27:19.891183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.907 [2024-04-24 20:27:19.891214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:49.907 [2024-04-24 20:27:19.891227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.813 ms 00:20:49.907 [2024-04-24 20:27:19.891237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.907 [2024-04-24 20:27:19.891327] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.907 [2024-04-24 20:27:19.891341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:49.907 [2024-04-24 20:27:19.891352] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:20:49.907 [2024-04-24 20:27:19.891362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.907 [2024-04-24 20:27:19.891402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.907 [2024-04-24 20:27:19.891416] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:49.907 [2024-04-24 20:27:19.891426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:49.907 [2024-04-24 20:27:19.891436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.907 [2024-04-24 20:27:19.891462] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:49.907 [2024-04-24 20:27:19.897260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.907 [2024-04-24 20:27:19.897291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:49.907 [2024-04-24 20:27:19.897304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.815 ms 00:20:49.907 [2024-04-24 20:27:19.897314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.907 [2024-04-24 20:27:19.897344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.907 [2024-04-24 20:27:19.897354] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:49.907 [2024-04-24 20:27:19.897364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:49.907 [2024-04-24 20:27:19.897374] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.907 [2024-04-24 20:27:19.897423] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:49.907 [2024-04-24 20:27:19.897449] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:49.907 [2024-04-24 20:27:19.897484] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:49.907 [2024-04-24 20:27:19.897501] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:49.907 [2024-04-24 20:27:19.897566] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:49.907 [2024-04-24 20:27:19.897580] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:49.907 [2024-04-24 20:27:19.897592] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:49.907 [2024-04-24 20:27:19.897604] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:49.907 [2024-04-24 20:27:19.897616] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:49.907 [2024-04-24 20:27:19.897630] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:49.907 [2024-04-24 20:27:19.897640] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:49.907 [2024-04-24 20:27:19.897649] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:49.907 [2024-04-24 20:27:19.897659] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:49.907 [2024-04-24 20:27:19.897669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.907 [2024-04-24 20:27:19.897678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:49.907 [2024-04-24 20:27:19.897689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:20:49.907 [2024-04-24 20:27:19.897699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.907 [2024-04-24 20:27:19.897752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.907 [2024-04-24 20:27:19.897763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:49.907 [2024-04-24 20:27:19.897775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:49.907 [2024-04-24 20:27:19.897785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.907 [2024-04-24 20:27:19.897850] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:49.907 [2024-04-24 20:27:19.897878] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:49.907 [2024-04-24 20:27:19.897888] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:49.907 [2024-04-24 20:27:19.897898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.907 [2024-04-24 20:27:19.897908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 
00:20:49.907 [2024-04-24 20:27:19.897917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:49.907 [2024-04-24 20:27:19.897927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:49.907 [2024-04-24 20:27:19.897937] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:49.907 [2024-04-24 20:27:19.897946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:49.907 [2024-04-24 20:27:19.897958] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:49.907 [2024-04-24 20:27:19.897986] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:49.907 [2024-04-24 20:27:19.897995] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:49.907 [2024-04-24 20:27:19.898015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:49.907 [2024-04-24 20:27:19.898024] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:49.907 [2024-04-24 20:27:19.898033] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:49.907 [2024-04-24 20:27:19.898042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.907 [2024-04-24 20:27:19.898051] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:49.907 [2024-04-24 20:27:19.898060] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:49.907 [2024-04-24 20:27:19.898069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.907 [2024-04-24 20:27:19.898078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:49.907 [2024-04-24 20:27:19.898088] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:49.907 [2024-04-24 20:27:19.898097] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:49.907 [2024-04-24 20:27:19.898106] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:49.907 [2024-04-24 20:27:19.898115] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:49.907 [2024-04-24 20:27:19.898124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:49.907 [2024-04-24 20:27:19.898139] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:49.907 [2024-04-24 20:27:19.898148] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:49.907 [2024-04-24 20:27:19.898157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:49.907 [2024-04-24 20:27:19.898165] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:49.907 [2024-04-24 20:27:19.898174] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:49.907 [2024-04-24 20:27:19.898183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:49.907 [2024-04-24 20:27:19.898192] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:49.907 [2024-04-24 20:27:19.898201] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:49.908 [2024-04-24 20:27:19.898209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:49.908 [2024-04-24 20:27:19.898218] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:49.908 [2024-04-24 20:27:19.898227] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:49.908 [2024-04-24 20:27:19.898236] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.25 MiB 00:20:49.908 [2024-04-24 20:27:19.898245] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:49.908 [2024-04-24 20:27:19.898254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:49.908 [2024-04-24 20:27:19.898263] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:49.908 [2024-04-24 20:27:19.898271] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:49.908 [2024-04-24 20:27:19.898282] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:49.908 [2024-04-24 20:27:19.898296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:49.908 [2024-04-24 20:27:19.898310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:49.908 [2024-04-24 20:27:19.898320] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:49.908 [2024-04-24 20:27:19.898329] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:49.908 [2024-04-24 20:27:19.898337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:49.908 [2024-04-24 20:27:19.898347] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:49.908 [2024-04-24 20:27:19.898356] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:49.908 [2024-04-24 20:27:19.898365] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:49.908 [2024-04-24 20:27:19.898375] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:49.908 [2024-04-24 20:27:19.898387] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:49.908 [2024-04-24 20:27:19.898398] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:49.908 [2024-04-24 20:27:19.898408] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:49.908 [2024-04-24 20:27:19.898418] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:49.908 [2024-04-24 20:27:19.898428] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:49.908 [2024-04-24 20:27:19.898439] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:49.908 [2024-04-24 20:27:19.898449] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:49.908 [2024-04-24 20:27:19.898459] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:49.908 [2024-04-24 20:27:19.898469] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:49.908 [2024-04-24 20:27:19.898479] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:49.908 [2024-04-24 20:27:19.898489] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 
00:20:49.908 [2024-04-24 20:27:19.898499] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:49.908 [2024-04-24 20:27:19.898509] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:49.908 [2024-04-24 20:27:19.898519] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:49.908 [2024-04-24 20:27:19.898529] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:49.908 [2024-04-24 20:27:19.898540] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:49.908 [2024-04-24 20:27:19.898550] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:49.908 [2024-04-24 20:27:19.898561] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:49.908 [2024-04-24 20:27:19.898571] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:49.908 [2024-04-24 20:27:19.898580] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:49.908 [2024-04-24 20:27:19.898591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:19.898601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:49.908 [2024-04-24 20:27:19.898611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:20:49.908 [2024-04-24 20:27:19.898621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.908 [2024-04-24 20:27:19.923655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:19.923687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.908 [2024-04-24 20:27:19.923701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.032 ms 00:20:49.908 [2024-04-24 20:27:19.923711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.908 [2024-04-24 20:27:19.923789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:19.923804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:49.908 [2024-04-24 20:27:19.923815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:49.908 [2024-04-24 20:27:19.923825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.908 [2024-04-24 20:27:19.987901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:19.987939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.908 [2024-04-24 20:27:19.987953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.117 ms 00:20:49.908 [2024-04-24 20:27:19.987966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.908 [2024-04-24 20:27:19.988004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:19.988015] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.908 [2024-04-24 20:27:19.988025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:49.908 [2024-04-24 20:27:19.988035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.908 [2024-04-24 20:27:19.988499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:19.988517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.908 [2024-04-24 20:27:19.988528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:20:49.908 [2024-04-24 20:27:19.988538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.908 [2024-04-24 20:27:19.988648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:19.988660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.908 [2024-04-24 20:27:19.988671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:49.908 [2024-04-24 20:27:19.988681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.908 [2024-04-24 20:27:20.011601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:20.011657] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:49.908 [2024-04-24 20:27:20.011673] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.933 ms 00:20:49.908 [2024-04-24 20:27:20.011684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.908 [2024-04-24 20:27:20.032837] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:49.908 [2024-04-24 20:27:20.032911] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:49.908 [2024-04-24 20:27:20.032929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:20.032941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:49.908 [2024-04-24 20:27:20.032955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.130 ms 00:20:49.908 [2024-04-24 20:27:20.032965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.908 [2024-04-24 20:27:20.064753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:20.064828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:49.908 [2024-04-24 20:27:20.064845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.765 ms 00:20:49.908 [2024-04-24 20:27:20.064868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.908 [2024-04-24 20:27:20.084204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:20.084252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:49.908 [2024-04-24 20:27:20.084266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.288 ms 00:20:49.908 [2024-04-24 20:27:20.084288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.908 [2024-04-24 20:27:20.102940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:20.102998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:20:49.908 [2024-04-24 20:27:20.103014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.637 ms 00:20:49.908 [2024-04-24 20:27:20.103024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.908 [2024-04-24 20:27:20.103555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.908 [2024-04-24 20:27:20.103570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:49.908 [2024-04-24 20:27:20.103581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:20:49.908 [2024-04-24 20:27:20.103592] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.167 [2024-04-24 20:27:20.197781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.167 [2024-04-24 20:27:20.197846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:50.167 [2024-04-24 20:27:20.197875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.320 ms 00:20:50.167 [2024-04-24 20:27:20.197886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.167 [2024-04-24 20:27:20.211358] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:50.167 [2024-04-24 20:27:20.214578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.167 [2024-04-24 20:27:20.214610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:50.167 [2024-04-24 20:27:20.214625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.634 ms 00:20:50.167 [2024-04-24 20:27:20.214635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.167 [2024-04-24 20:27:20.214734] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.168 [2024-04-24 20:27:20.214754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:50.168 [2024-04-24 20:27:20.214765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:50.168 [2024-04-24 20:27:20.214775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.168 [2024-04-24 20:27:20.214853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.168 [2024-04-24 20:27:20.214878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:50.168 [2024-04-24 20:27:20.214906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:50.168 [2024-04-24 20:27:20.214915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.168 [2024-04-24 20:27:20.217106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.168 [2024-04-24 20:27:20.217136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:50.168 [2024-04-24 20:27:20.217150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.176 ms 00:20:50.168 [2024-04-24 20:27:20.217159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.168 [2024-04-24 20:27:20.217190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.168 [2024-04-24 20:27:20.217202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:50.168 [2024-04-24 20:27:20.217212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:50.168 [2024-04-24 20:27:20.217222] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.168 [2024-04-24 
20:27:20.217256] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:50.168 [2024-04-24 20:27:20.217268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.168 [2024-04-24 20:27:20.217278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:50.168 [2024-04-24 20:27:20.217288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:50.168 [2024-04-24 20:27:20.217301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.168 [2024-04-24 20:27:20.256576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.168 [2024-04-24 20:27:20.256617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:50.168 [2024-04-24 20:27:20.256631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.319 ms 00:20:50.168 [2024-04-24 20:27:20.256641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.168 [2024-04-24 20:27:20.256713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:50.168 [2024-04-24 20:27:20.256730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:50.168 [2024-04-24 20:27:20.256740] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:20:50.168 [2024-04-24 20:27:20.256749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:50.168 [2024-04-24 20:27:20.257905] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 397.581 ms, result 0 00:21:26.957  Copying: 1024/1024 [MB] (average 27 MBps)[2024-04-24 20:27:56.983111] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.957 [2024-04-24 20:27:56.983182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:26.957 [2024-04-24 20:27:56.983200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:26.957 [2024-04-24 20:27:56.983211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.957 [2024-04-24 20:27:56.985040] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:26.957 [2024-04-24 20:27:56.991869] mngt/ftl_mngt.c: 406:trace_step:
*NOTICE*: [FTL][ftl0] Action 00:21:26.957 [2024-04-24 20:27:56.991909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:26.957 [2024-04-24 20:27:56.991924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.797 ms 00:21:26.957 [2024-04-24 20:27:56.991935] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.957 [2024-04-24 20:27:57.002400] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.957 [2024-04-24 20:27:57.002443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:26.957 [2024-04-24 20:27:57.002458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.496 ms 00:21:26.957 [2024-04-24 20:27:57.002469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.957 [2024-04-24 20:27:57.020882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.957 [2024-04-24 20:27:57.020928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:26.957 [2024-04-24 20:27:57.020955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.423 ms 00:21:26.957 [2024-04-24 20:27:57.020968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.957 [2024-04-24 20:27:57.026292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.957 [2024-04-24 20:27:57.026327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:26.957 [2024-04-24 20:27:57.026338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.296 ms 00:21:26.957 [2024-04-24 20:27:57.026349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.957 [2024-04-24 20:27:57.066265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.957 [2024-04-24 20:27:57.066323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:26.957 [2024-04-24 20:27:57.066339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.939 ms 00:21:26.957 [2024-04-24 20:27:57.066349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.957 [2024-04-24 20:27:57.087650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.957 [2024-04-24 20:27:57.087697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:26.957 [2024-04-24 20:27:57.087713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.282 ms 00:21:26.957 [2024-04-24 20:27:57.087732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.957 [2024-04-24 20:27:57.189805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.957 [2024-04-24 20:27:57.189911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:26.957 [2024-04-24 20:27:57.189930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.186 ms 00:21:26.957 [2024-04-24 20:27:57.189942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.217 [2024-04-24 20:27:57.229464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.217 [2024-04-24 20:27:57.229518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:27.217 [2024-04-24 20:27:57.229534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.564 ms 00:21:27.217 [2024-04-24 20:27:57.229544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.217 
[2024-04-24 20:27:57.269452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.217 [2024-04-24 20:27:57.269509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:27.217 [2024-04-24 20:27:57.269525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.930 ms 00:21:27.217 [2024-04-24 20:27:57.269535] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.217 [2024-04-24 20:27:57.308736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.217 [2024-04-24 20:27:57.308791] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:27.217 [2024-04-24 20:27:57.308807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.215 ms 00:21:27.217 [2024-04-24 20:27:57.308817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.217 [2024-04-24 20:27:57.347384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.217 [2024-04-24 20:27:57.347450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:27.217 [2024-04-24 20:27:57.347466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.525 ms 00:21:27.217 [2024-04-24 20:27:57.347475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.217 [2024-04-24 20:27:57.347518] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:27.217 [2024-04-24 20:27:57.347543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 113920 / 261120 wr_cnt: 1 state: open 00:21:27.217 [2024-04-24 20:27:57.347557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:27.217 [2024-04-24 20:27:57.347767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.347777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.347788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.347798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.347809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.347963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.347974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.347985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.347996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348106] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 
20:27:57.348371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 
00:21:27.218 [2024-04-24 20:27:57.348631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:27.218 [2024-04-24 20:27:57.348754] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:27.218 [2024-04-24 20:27:57.348764] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c0c9aa20-1edc-4827-ab08-6b786b351665 00:21:27.218 [2024-04-24 20:27:57.348774] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 113920 00:21:27.218 [2024-04-24 20:27:57.348784] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 114880 00:21:27.219 [2024-04-24 20:27:57.348793] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 113920 00:21:27.219 [2024-04-24 20:27:57.348816] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0084 00:21:27.219 [2024-04-24 20:27:57.348825] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:27.219 [2024-04-24 20:27:57.348837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:27.219 [2024-04-24 20:27:57.348847] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:27.219 [2024-04-24 20:27:57.348865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:27.219 [2024-04-24 20:27:57.348874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:27.219 [2024-04-24 20:27:57.348884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.219 [2024-04-24 20:27:57.348894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:27.219 [2024-04-24 20:27:57.348908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:21:27.219 [2024-04-24 20:27:57.348917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.219 [2024-04-24 20:27:57.370115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.219 [2024-04-24 20:27:57.370175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:27.219 [2024-04-24 
20:27:57.370197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.192 ms 00:21:27.219 [2024-04-24 20:27:57.370212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.219 [2024-04-24 20:27:57.370506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.219 [2024-04-24 20:27:57.370524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:27.219 [2024-04-24 20:27:57.370540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:21:27.219 [2024-04-24 20:27:57.370554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.219 [2024-04-24 20:27:57.424440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.219 [2024-04-24 20:27:57.424502] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:27.219 [2024-04-24 20:27:57.424518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.219 [2024-04-24 20:27:57.424529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.219 [2024-04-24 20:27:57.424611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.219 [2024-04-24 20:27:57.424622] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:27.219 [2024-04-24 20:27:57.424634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.219 [2024-04-24 20:27:57.424643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.219 [2024-04-24 20:27:57.424718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.219 [2024-04-24 20:27:57.424730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:27.219 [2024-04-24 20:27:57.424741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.219 [2024-04-24 20:27:57.424750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.219 [2024-04-24 20:27:57.424767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.219 [2024-04-24 20:27:57.424782] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:27.219 [2024-04-24 20:27:57.424792] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.219 [2024-04-24 20:27:57.424802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.478 [2024-04-24 20:27:57.546773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.478 [2024-04-24 20:27:57.546824] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:27.478 [2024-04-24 20:27:57.546847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.478 [2024-04-24 20:27:57.546870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.478 [2024-04-24 20:27:57.596533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.478 [2024-04-24 20:27:57.596589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:27.478 [2024-04-24 20:27:57.596604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.478 [2024-04-24 20:27:57.596615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.478 [2024-04-24 20:27:57.596695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.478 [2024-04-24 20:27:57.596707] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:27.478 [2024-04-24 20:27:57.596718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.478 [2024-04-24 20:27:57.596728] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.478 [2024-04-24 20:27:57.596765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.478 [2024-04-24 20:27:57.596777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:27.478 [2024-04-24 20:27:57.596795] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.478 [2024-04-24 20:27:57.596805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.478 [2024-04-24 20:27:57.596954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.478 [2024-04-24 20:27:57.596968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:27.478 [2024-04-24 20:27:57.596978] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.478 [2024-04-24 20:27:57.596987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.478 [2024-04-24 20:27:57.597021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.478 [2024-04-24 20:27:57.597033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:27.478 [2024-04-24 20:27:57.597043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.478 [2024-04-24 20:27:57.597057] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.478 [2024-04-24 20:27:57.597093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.478 [2024-04-24 20:27:57.597103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:27.478 [2024-04-24 20:27:57.597113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.478 [2024-04-24 20:27:57.597140] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.478 [2024-04-24 20:27:57.597195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:27.478 [2024-04-24 20:27:57.597206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:27.478 [2024-04-24 20:27:57.597219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:27.478 [2024-04-24 20:27:57.597229] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.478 [2024-04-24 20:27:57.597341] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 616.716 ms, result 0 00:21:29.386 00:21:29.386 00:21:29.386 20:27:59 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:29.386 [2024-04-24 20:27:59.456019] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:21:29.386 [2024-04-24 20:27:59.456182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80653 ] 00:21:29.645 [2024-04-24 20:27:59.639504] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.903 [2024-04-24 20:27:59.881453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.215 [2024-04-24 20:28:00.293548] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:30.215 [2024-04-24 20:28:00.293614] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:30.477 [2024-04-24 20:28:00.454821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.477 [2024-04-24 20:28:00.454933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:30.477 [2024-04-24 20:28:00.454955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:30.477 [2024-04-24 20:28:00.454970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.477 [2024-04-24 20:28:00.455071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.477 [2024-04-24 20:28:00.455090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:30.477 [2024-04-24 20:28:00.455105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:30.477 [2024-04-24 20:28:00.455119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.477 [2024-04-24 20:28:00.455149] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:30.477 [2024-04-24 20:28:00.456231] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:30.477 [2024-04-24 20:28:00.456285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.477 [2024-04-24 20:28:00.456302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:30.477 [2024-04-24 20:28:00.456318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.143 ms 00:21:30.477 [2024-04-24 20:28:00.456333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.477 [2024-04-24 20:28:00.458169] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:30.477 [2024-04-24 20:28:00.476802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.478 [2024-04-24 20:28:00.476844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:30.478 [2024-04-24 20:28:00.476873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.664 ms 00:21:30.478 [2024-04-24 20:28:00.476884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.478 [2024-04-24 20:28:00.476945] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.478 [2024-04-24 20:28:00.476957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:30.478 [2024-04-24 20:28:00.476968] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:30.478 [2024-04-24 20:28:00.476978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.478 [2024-04-24 20:28:00.483668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.478 [2024-04-24 
20:28:00.483696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:30.478 [2024-04-24 20:28:00.483708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.629 ms 00:21:30.478 [2024-04-24 20:28:00.483718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.478 [2024-04-24 20:28:00.483807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.478 [2024-04-24 20:28:00.483821] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:30.478 [2024-04-24 20:28:00.483830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:30.478 [2024-04-24 20:28:00.483840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.478 [2024-04-24 20:28:00.483896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.478 [2024-04-24 20:28:00.483911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:30.478 [2024-04-24 20:28:00.483921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:30.478 [2024-04-24 20:28:00.483931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.478 [2024-04-24 20:28:00.483957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:30.478 [2024-04-24 20:28:00.489594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.478 [2024-04-24 20:28:00.489624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:30.478 [2024-04-24 20:28:00.489635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.653 ms 00:21:30.478 [2024-04-24 20:28:00.489645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.478 [2024-04-24 20:28:00.489675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.478 [2024-04-24 20:28:00.489685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:30.478 [2024-04-24 20:28:00.489695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:30.478 [2024-04-24 20:28:00.489705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.478 [2024-04-24 20:28:00.489753] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:30.478 [2024-04-24 20:28:00.489780] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:30.478 [2024-04-24 20:28:00.489812] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:30.478 [2024-04-24 20:28:00.489828] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:30.478 [2024-04-24 20:28:00.489904] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:30.478 [2024-04-24 20:28:00.489917] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:30.478 [2024-04-24 20:28:00.489929] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:30.478 [2024-04-24 20:28:00.489941] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:30.478 [2024-04-24 20:28:00.489952] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:30.478 [2024-04-24 20:28:00.489967] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:30.478 [2024-04-24 20:28:00.489976] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:30.478 [2024-04-24 20:28:00.489985] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:30.478 [2024-04-24 20:28:00.489995] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:30.478 [2024-04-24 20:28:00.490005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.478 [2024-04-24 20:28:00.490015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:30.478 [2024-04-24 20:28:00.490025] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:21:30.478 [2024-04-24 20:28:00.490034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.478 [2024-04-24 20:28:00.490088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.478 [2024-04-24 20:28:00.490098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:30.478 [2024-04-24 20:28:00.490111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:30.478 [2024-04-24 20:28:00.490120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.478 [2024-04-24 20:28:00.490184] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:30.478 [2024-04-24 20:28:00.490196] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:30.478 [2024-04-24 20:28:00.490206] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:30.478 [2024-04-24 20:28:00.490215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.478 [2024-04-24 20:28:00.490225] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:30.478 [2024-04-24 20:28:00.490234] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:30.478 [2024-04-24 20:28:00.490243] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:30.478 [2024-04-24 20:28:00.490255] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:30.478 [2024-04-24 20:28:00.490264] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:30.478 [2024-04-24 20:28:00.490273] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:30.478 [2024-04-24 20:28:00.490282] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:30.478 [2024-04-24 20:28:00.490290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:30.478 [2024-04-24 20:28:00.490310] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:30.478 [2024-04-24 20:28:00.490319] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:30.478 [2024-04-24 20:28:00.490328] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:30.478 [2024-04-24 20:28:00.490337] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.478 [2024-04-24 20:28:00.490346] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:30.478 [2024-04-24 20:28:00.490355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:30.478 [2024-04-24 20:28:00.490363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:21:30.478 [2024-04-24 20:28:00.490372] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:30.478 [2024-04-24 20:28:00.490381] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:30.478 [2024-04-24 20:28:00.490390] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:30.478 [2024-04-24 20:28:00.490399] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:30.478 [2024-04-24 20:28:00.490408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:30.478 [2024-04-24 20:28:00.490417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:30.478 [2024-04-24 20:28:00.490426] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:30.478 [2024-04-24 20:28:00.490435] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:30.478 [2024-04-24 20:28:00.490443] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:30.478 [2024-04-24 20:28:00.490452] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:30.478 [2024-04-24 20:28:00.490461] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:30.478 [2024-04-24 20:28:00.490469] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:30.478 [2024-04-24 20:28:00.490478] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:30.478 [2024-04-24 20:28:00.490487] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:30.478 [2024-04-24 20:28:00.490496] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:30.478 [2024-04-24 20:28:00.490504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:30.478 [2024-04-24 20:28:00.490513] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:30.478 [2024-04-24 20:28:00.490521] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:30.478 [2024-04-24 20:28:00.490530] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:30.478 [2024-04-24 20:28:00.490539] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:30.478 [2024-04-24 20:28:00.490548] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:30.478 [2024-04-24 20:28:00.490557] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:30.478 [2024-04-24 20:28:00.490566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:30.478 [2024-04-24 20:28:00.490579] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:30.478 [2024-04-24 20:28:00.490593] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:30.478 [2024-04-24 20:28:00.490603] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:30.478 [2024-04-24 20:28:00.490612] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:30.478 [2024-04-24 20:28:00.490621] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:30.478 [2024-04-24 20:28:00.490630] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:30.478 [2024-04-24 20:28:00.490639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:30.478 [2024-04-24 20:28:00.490648] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:30.478 [2024-04-24 20:28:00.490657] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:30.478 [2024-04-24 20:28:00.490669] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:30.478 [2024-04-24 20:28:00.490679] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:30.478 [2024-04-24 20:28:00.490689] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:30.478 [2024-04-24 20:28:00.490699] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:30.479 [2024-04-24 20:28:00.490710] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:30.479 [2024-04-24 20:28:00.490720] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:30.479 [2024-04-24 20:28:00.490730] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:30.479 [2024-04-24 20:28:00.490739] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:30.479 [2024-04-24 20:28:00.490749] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:30.479 [2024-04-24 20:28:00.490759] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:30.479 [2024-04-24 20:28:00.490769] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:30.479 [2024-04-24 20:28:00.490779] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:30.479 [2024-04-24 20:28:00.490789] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:30.479 [2024-04-24 20:28:00.490799] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:30.479 [2024-04-24 20:28:00.490809] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:30.479 [2024-04-24 20:28:00.490820] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:30.479 [2024-04-24 20:28:00.490831] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:30.479 [2024-04-24 20:28:00.490849] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:30.479 [2024-04-24 20:28:00.490868] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:30.479 [2024-04-24 20:28:00.490878] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:21:30.479 [2024-04-24 20:28:00.490892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.490902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:30.479 [2024-04-24 20:28:00.490911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.743 ms 00:21:30.479 [2024-04-24 20:28:00.490920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.479 [2024-04-24 20:28:00.515151] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.515186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:30.479 [2024-04-24 20:28:00.515200] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.229 ms 00:21:30.479 [2024-04-24 20:28:00.515212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.479 [2024-04-24 20:28:00.515293] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.515307] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:30.479 [2024-04-24 20:28:00.515318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:30.479 [2024-04-24 20:28:00.515328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.479 [2024-04-24 20:28:00.584846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.584899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:30.479 [2024-04-24 20:28:00.584914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.578 ms 00:21:30.479 [2024-04-24 20:28:00.584927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.479 [2024-04-24 20:28:00.584984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.584995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:30.479 [2024-04-24 20:28:00.585006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:30.479 [2024-04-24 20:28:00.585016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.479 [2024-04-24 20:28:00.585483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.585496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:30.479 [2024-04-24 20:28:00.585507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:21:30.479 [2024-04-24 20:28:00.585516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.479 [2024-04-24 20:28:00.585626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.585640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:30.479 [2024-04-24 20:28:00.585657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:21:30.479 [2024-04-24 20:28:00.585667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.479 [2024-04-24 20:28:00.607651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.607696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:30.479 [2024-04-24 20:28:00.607711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.998 ms 00:21:30.479 [2024-04-24 
20:28:00.607721] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.479 [2024-04-24 20:28:00.627930] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:30.479 [2024-04-24 20:28:00.627970] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:30.479 [2024-04-24 20:28:00.627984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.627995] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:30.479 [2024-04-24 20:28:00.628006] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.168 ms 00:21:30.479 [2024-04-24 20:28:00.628016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.479 [2024-04-24 20:28:00.658831] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.658895] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:30.479 [2024-04-24 20:28:00.658910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.821 ms 00:21:30.479 [2024-04-24 20:28:00.658921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.479 [2024-04-24 20:28:00.679036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.679077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:30.479 [2024-04-24 20:28:00.679091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.092 ms 00:21:30.479 [2024-04-24 20:28:00.679101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.479 [2024-04-24 20:28:00.698074] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.698109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:30.479 [2024-04-24 20:28:00.698122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.951 ms 00:21:30.479 [2024-04-24 20:28:00.698132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.479 [2024-04-24 20:28:00.698610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.479 [2024-04-24 20:28:00.698625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:30.479 [2024-04-24 20:28:00.698636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:21:30.479 [2024-04-24 20:28:00.698645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.738 [2024-04-24 20:28:00.787392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.738 [2024-04-24 20:28:00.787457] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:30.738 [2024-04-24 20:28:00.787474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.872 ms 00:21:30.738 [2024-04-24 20:28:00.787484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.738 [2024-04-24 20:28:00.800034] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:30.738 [2024-04-24 20:28:00.803152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.738 [2024-04-24 20:28:00.803182] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:30.738 [2024-04-24 20:28:00.803195] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.633 ms 00:21:30.738 [2024-04-24 20:28:00.803205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.738 [2024-04-24 20:28:00.803299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.738 [2024-04-24 20:28:00.803318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:30.738 [2024-04-24 20:28:00.803329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:30.738 [2024-04-24 20:28:00.803339] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.738 [2024-04-24 20:28:00.804620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.738 [2024-04-24 20:28:00.804653] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:30.738 [2024-04-24 20:28:00.804664] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.244 ms 00:21:30.738 [2024-04-24 20:28:00.804674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.738 [2024-04-24 20:28:00.806762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.738 [2024-04-24 20:28:00.806789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:30.738 [2024-04-24 20:28:00.806802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:21:30.738 [2024-04-24 20:28:00.806812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.738 [2024-04-24 20:28:00.806852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.738 [2024-04-24 20:28:00.806872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:30.738 [2024-04-24 20:28:00.806882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:30.738 [2024-04-24 20:28:00.806891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.738 [2024-04-24 20:28:00.806924] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:30.738 [2024-04-24 20:28:00.806936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.738 [2024-04-24 20:28:00.806946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:30.738 [2024-04-24 20:28:00.806955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:30.738 [2024-04-24 20:28:00.806968] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.738 [2024-04-24 20:28:00.843227] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.738 [2024-04-24 20:28:00.843266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:30.739 [2024-04-24 20:28:00.843280] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.300 ms 00:21:30.739 [2024-04-24 20:28:00.843291] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.739 [2024-04-24 20:28:00.843361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:30.739 [2024-04-24 20:28:00.843378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:30.739 [2024-04-24 20:28:00.843389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:30.739 [2024-04-24 20:28:00.843399] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:30.739 [2024-04-24 20:28:00.848751] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 393.329 ms, result 0 00:22:02.441 Copying: 1024/1024 [MB] (average 33 MBps)[2024-04-24 20:28:32.566736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.441 [2024-04-24 20:28:32.567077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:02.441 [2024-04-24 20:28:32.567198] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:02.441 [2024-04-24 20:28:32.567240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.441 [2024-04-24 20:28:32.567350] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:02.441 [2024-04-24 20:28:32.572650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.442 [2024-04-24 20:28:32.572830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:02.442 [2024-04-24 20:28:32.572948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.174 ms 00:22:02.442 [2024-04-24 20:28:32.572967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.442 [2024-04-24 20:28:32.573197] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.442 [2024-04-24 20:28:32.573218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:02.442 [2024-04-24 20:28:32.573393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:22:02.442 [2024-04-24 20:28:32.573405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.442 [2024-04-24 20:28:32.577601] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.442 [2024-04-24 20:28:32.577647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:02.442 [2024-04-24 20:28:32.577670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.184 ms 00:22:02.442 [2024-04-24 20:28:32.577684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.442 [2024-04-24 20:28:32.583934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.442 [2024-04-24 20:28:32.583979] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:02.442 [2024-04-24 20:28:32.583994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.173 ms 00:22:02.442 [2024-04-24 20:28:32.584005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.442
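The progress line above ends with an average of 33 MBps for the 1024 MB copy. That figure can be cross-checked against the trace timestamps: FTL startup finished around 20:28:00.849 and the final copy entry is stamped 20:28:32.567, roughly 31.7 s later. A quick sketch of the arithmetic, assuming bc is available on the test VM:

    elapsed=31.7                           # ~20:28:00.85 to ~20:28:32.57 per the trace
    echo "scale=1; 1024 / $elapsed" | bc   # 32.3 MB/s, in line with the reported 33 MBps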
[2024-04-24 20:28:32.627381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.442 [2024-04-24 20:28:32.627433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:02.442 [2024-04-24 20:28:32.627464] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.402 ms 00:22:02.442 [2024-04-24 20:28:32.627476] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.442 [2024-04-24 20:28:32.649888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.442 [2024-04-24 20:28:32.649934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:02.442 [2024-04-24 20:28:32.649949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.400 ms 00:22:02.442 [2024-04-24 20:28:32.649966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-04-24 20:28:32.754909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-04-24 20:28:32.754977] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:02.702 [2024-04-24 20:28:32.755002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.063 ms 00:22:02.702 [2024-04-24 20:28:32.755020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-04-24 20:28:32.795735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-04-24 20:28:32.795785] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:02.702 [2024-04-24 20:28:32.795800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.758 ms 00:22:02.702 [2024-04-24 20:28:32.795811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-04-24 20:28:32.835458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-04-24 20:28:32.835504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:02.702 [2024-04-24 20:28:32.835518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.606 ms 00:22:02.702 [2024-04-24 20:28:32.835528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-04-24 20:28:32.874001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-04-24 20:28:32.874047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:02.702 [2024-04-24 20:28:32.874062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.455 ms 00:22:02.702 [2024-04-24 20:28:32.874072] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.702 [2024-04-24 20:28:32.912800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.702 [2024-04-24 20:28:32.912868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:02.702 [2024-04-24 20:28:32.912885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.669 ms 00:22:02.702 [2024-04-24 20:28:32.912896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.703 [2024-04-24 20:28:32.912938] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:02.703 [2024-04-24 20:28:32.912963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:22:02.703 [2024-04-24 20:28:32.912977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 
wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.912988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913540] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913816] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:02.703 [2024-04-24 20:28:32.913839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.913994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.914005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.914016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.914028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.914039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.914050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.914061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.914072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.914084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:02.704 [2024-04-24 20:28:32.914103] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:02.704 [2024-04-24 20:28:32.914113] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c0c9aa20-1edc-4827-ab08-6b786b351665 00:22:02.704 [2024-04-24 20:28:32.914124] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:22:02.704 [2024-04-24 20:28:32.914134] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 20672 00:22:02.704 [2024-04-24 20:28:32.914144] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 19712 00:22:02.704 [2024-04-24 20:28:32.914155] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0487 00:22:02.704 [2024-04-24 20:28:32.914177] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:02.704 [2024-04-24 20:28:32.914188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:02.704 [2024-04-24 20:28:32.914199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:02.704 [2024-04-24 20:28:32.914208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:02.704 [2024-04-24 20:28:32.914218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:02.704 [2024-04-24 20:28:32.914228] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.704 [2024-04-24 20:28:32.914238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:02.704 [2024-04-24 20:28:32.914253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.293 ms 00:22:02.704 [2024-04-24 20:28:32.914263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.704 [2024-04-24 20:28:32.935294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.704 [2024-04-24 20:28:32.935335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:02.704 [2024-04-24 20:28:32.935350] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.026 ms 00:22:02.704 [2024-04-24 20:28:32.935362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.963 [2024-04-24 20:28:32.935609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:02.963 [2024-04-24 20:28:32.935621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:02.963 [2024-04-24 20:28:32.935633] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:22:02.963 [2024-04-24 20:28:32.935643] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.963 [2024-04-24 20:28:32.992990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.964 [2024-04-24 20:28:32.993043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:02.964 [2024-04-24 20:28:32.993058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.964 [2024-04-24 20:28:32.993069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.964 [2024-04-24 20:28:32.993140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.964 [2024-04-24 20:28:32.993152] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:02.964 [2024-04-24 20:28:32.993163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.964 [2024-04-24 20:28:32.993174] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.964 [2024-04-24 20:28:32.993245] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.964 [2024-04-24 20:28:32.993259] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:02.964 [2024-04-24 20:28:32.993270] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.964 [2024-04-24 20:28:32.993280] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.964 [2024-04-24 20:28:32.993299] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.964 [2024-04-24 20:28:32.993314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:02.964 [2024-04-24 20:28:32.993325] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.964 [2024-04-24 20:28:32.993335] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.964 [2024-04-24 20:28:33.116012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.964 [2024-04-24 20:28:33.116080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:02.964 [2024-04-24 20:28:33.116096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.964 [2024-04-24 20:28:33.116108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.964 [2024-04-24 20:28:33.165071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.964 [2024-04-24 20:28:33.165132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:02.964 [2024-04-24 20:28:33.165147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.964 [2024-04-24 20:28:33.165157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.964 [2024-04-24 20:28:33.165224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.964 [2024-04-24 20:28:33.165235] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:02.964 [2024-04-24 20:28:33.165246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.964 [2024-04-24 20:28:33.165256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.964 [2024-04-24 20:28:33.165291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.964 [2024-04-24 20:28:33.165302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:02.964 [2024-04-24 20:28:33.165318] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.964 [2024-04-24 20:28:33.165328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.964 [2024-04-24 20:28:33.165432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.964 [2024-04-24 20:28:33.165445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:02.964 [2024-04-24 20:28:33.165455] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.964 [2024-04-24 20:28:33.165465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.964 [2024-04-24 20:28:33.165499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.964 [2024-04-24 20:28:33.165510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:02.964 [2024-04-24 20:28:33.165520] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.964 [2024-04-24 20:28:33.165534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.964 [2024-04-24 20:28:33.165588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:22:02.964 [2024-04-24 20:28:33.165599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:02.964 [2024-04-24 20:28:33.165610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.964 [2024-04-24 20:28:33.165620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.964 [2024-04-24 20:28:33.165663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:02.964 [2024-04-24 20:28:33.165675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:02.964 [2024-04-24 20:28:33.165688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:02.964 [2024-04-24 20:28:33.165698] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:02.964 [2024-04-24 20:28:33.165815] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 600.024 ms, result 0 00:22:04.344 00:22:04.344 00:22:04.344 20:28:34 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:06.254 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:06.254 20:28:36 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:06.254 20:28:36 -- ftl/restore.sh@85 -- # restore_kill 00:22:06.254 20:28:36 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:06.254 20:28:36 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:06.254 20:28:36 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:06.512 20:28:36 -- ftl/restore.sh@32 -- # killprocess 79236 00:22:06.512 20:28:36 -- common/autotest_common.sh@936 -- # '[' -z 79236 ']' 00:22:06.512 20:28:36 -- common/autotest_common.sh@940 -- # kill -0 79236 00:22:06.512 Process with pid 79236 is not found 00:22:06.512 Remove shared memory files 00:22:06.512 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (79236) - No such process 00:22:06.512 20:28:36 -- common/autotest_common.sh@963 -- # echo 'Process with pid 79236 is not found' 00:22:06.512 20:28:36 -- ftl/restore.sh@33 -- # remove_shm 00:22:06.512 20:28:36 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:06.512 20:28:36 -- ftl/common.sh@205 -- # rm -f rm -f 00:22:06.512 20:28:36 -- ftl/common.sh@206 -- # rm -f rm -f 00:22:06.513 20:28:36 -- ftl/common.sh@207 -- # rm -f rm -f 00:22:06.513 20:28:36 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:06.513 20:28:36 -- ftl/common.sh@209 -- # rm -f rm -f 00:22:06.513 00:22:06.513 real 2m52.937s 00:22:06.513 user 2m40.673s 00:22:06.513 sys 0m13.298s 00:22:06.513 20:28:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:06.513 ************************************ 00:22:06.513 END TEST ftl_restore 00:22:06.513 ************************************ 00:22:06.513 20:28:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.513 20:28:36 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:06.513 20:28:36 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:06.513 20:28:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:06.513 20:28:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.513 ************************************ 00:22:06.513 START TEST ftl_dirty_shutdown 00:22:06.513 ************************************ 00:22:06.513 20:28:36 -- 
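The ftl_restore pass above hinges on the md5sum -c line ("testfile: OK"): a checksum of the test file is recorded while the FTL bdev is live, the device goes through its shutdown/restore cycle, and the restored device must reproduce byte-identical data. A minimal sketch of that pattern, using the paths from the trace (the teardown and restore steps themselves are driven by restore.sh and the FTL bdev machinery, not shown here):

    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
    md5sum "$testfile" > "$testfile.md5"   # checksum taken before the shutdown/restore cycle
    # ... shut the FTL bdev down and bring it back from the saved ftl.json config ...
    md5sum -c "$testfile.md5"              # prints "testfile: OK" iff the restore was faithful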
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:06.513 * Looking for test storage... 00:22:06.513 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:06.513 20:28:36 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:06.513 20:28:36 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:06.778 20:28:36 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:06.778 20:28:36 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:06.778 20:28:36 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:06.778 20:28:36 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:06.778 20:28:36 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:06.778 20:28:36 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:06.778 20:28:36 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:06.778 20:28:36 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:06.778 20:28:36 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:06.778 20:28:36 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:06.778 20:28:36 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:06.778 20:28:36 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:06.778 20:28:36 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:06.778 20:28:36 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:06.778 20:28:36 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:06.778 20:28:36 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:06.778 20:28:36 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:06.778 20:28:36 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:06.778 20:28:36 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:06.778 20:28:36 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:06.778 20:28:36 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:06.778 20:28:36 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:06.778 20:28:36 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:06.778 20:28:36 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:06.778 20:28:36 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:06.778 20:28:36 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:06.778 20:28:36 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:06.778 20:28:36 -- 
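The option loop traced above (getopts :u:c: opt) is what turns "-c 0000:00:10.0" into a separate NV-cache PCIe device, leaving the base device BDF as the remaining positional argument. A sketch of that parsing; the -u branch is an assumption (it never fires in this run):

    while getopts ':u:c:' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # -c 0000:00:10.0 in this invocation
        u) uuid=$OPTARG ;;       # assumed: reattach to an existing FTL instance by UUID
      esac
    done
    shift 2                      # as in the trace: drop '-c <bdf>' from $@
    device=$1                    # leaves 0000:00:11.0 as the base bdev, matching @23 above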
ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@45 -- # svcpid=81096 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81096 00:22:06.778 20:28:36 -- common/autotest_common.sh@817 -- # '[' -z 81096 ']' 00:22:06.778 20:28:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.778 20:28:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:06.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.778 20:28:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.778 20:28:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:06.778 20:28:36 -- common/autotest_common.sh@10 -- # set +x 00:22:06.778 20:28:36 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:06.778 [2024-04-24 20:28:36.864570] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:22:06.778 [2024-04-24 20:28:36.864684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81096 ] 00:22:07.037 [2024-04-24 20:28:37.035154] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.037 [2024-04-24 20:28:37.269592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.435 20:28:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:08.435 20:28:38 -- common/autotest_common.sh@850 -- # return 0 00:22:08.435 20:28:38 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:08.435 20:28:38 -- ftl/common.sh@54 -- # local name=nvme0 00:22:08.435 20:28:38 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:08.435 20:28:38 -- ftl/common.sh@56 -- # local size=103424 00:22:08.435 20:28:38 -- ftl/common.sh@59 -- # local base_bdev 00:22:08.435 20:28:38 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:08.435 20:28:38 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:08.435 20:28:38 -- ftl/common.sh@62 -- # local base_size 00:22:08.435 20:28:38 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:08.435 20:28:38 -- common/autotest_common.sh@1364 -- # local bdev_name=nvme0n1 00:22:08.435 20:28:38 -- common/autotest_common.sh@1365 -- # local bdev_info 00:22:08.435 20:28:38 -- common/autotest_common.sh@1366 -- # local bs 00:22:08.435 20:28:38 -- common/autotest_common.sh@1367 -- # local nb 00:22:08.435 20:28:38 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:08.708 20:28:38 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:22:08.708 { 00:22:08.708 "name": "nvme0n1", 00:22:08.708 "aliases": [ 00:22:08.708 "cf2dcce0-0b38-4e6d-9dcf-3c3606d77cc5" 00:22:08.708 ], 00:22:08.708 "product_name": "NVMe disk", 00:22:08.708 "block_size": 4096, 00:22:08.708 
"num_blocks": 1310720, 00:22:08.708 "uuid": "cf2dcce0-0b38-4e6d-9dcf-3c3606d77cc5", 00:22:08.708 "assigned_rate_limits": { 00:22:08.708 "rw_ios_per_sec": 0, 00:22:08.708 "rw_mbytes_per_sec": 0, 00:22:08.708 "r_mbytes_per_sec": 0, 00:22:08.708 "w_mbytes_per_sec": 0 00:22:08.708 }, 00:22:08.708 "claimed": true, 00:22:08.708 "claim_type": "read_many_write_one", 00:22:08.708 "zoned": false, 00:22:08.708 "supported_io_types": { 00:22:08.708 "read": true, 00:22:08.708 "write": true, 00:22:08.708 "unmap": true, 00:22:08.708 "write_zeroes": true, 00:22:08.708 "flush": true, 00:22:08.708 "reset": true, 00:22:08.708 "compare": true, 00:22:08.708 "compare_and_write": false, 00:22:08.708 "abort": true, 00:22:08.708 "nvme_admin": true, 00:22:08.708 "nvme_io": true 00:22:08.708 }, 00:22:08.708 "driver_specific": { 00:22:08.708 "nvme": [ 00:22:08.708 { 00:22:08.708 "pci_address": "0000:00:11.0", 00:22:08.708 "trid": { 00:22:08.708 "trtype": "PCIe", 00:22:08.708 "traddr": "0000:00:11.0" 00:22:08.708 }, 00:22:08.708 "ctrlr_data": { 00:22:08.709 "cntlid": 0, 00:22:08.709 "vendor_id": "0x1b36", 00:22:08.709 "model_number": "QEMU NVMe Ctrl", 00:22:08.709 "serial_number": "12341", 00:22:08.709 "firmware_revision": "8.0.0", 00:22:08.709 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:08.709 "oacs": { 00:22:08.709 "security": 0, 00:22:08.709 "format": 1, 00:22:08.709 "firmware": 0, 00:22:08.709 "ns_manage": 1 00:22:08.709 }, 00:22:08.709 "multi_ctrlr": false, 00:22:08.709 "ana_reporting": false 00:22:08.709 }, 00:22:08.709 "vs": { 00:22:08.709 "nvme_version": "1.4" 00:22:08.709 }, 00:22:08.709 "ns_data": { 00:22:08.709 "id": 1, 00:22:08.709 "can_share": false 00:22:08.709 } 00:22:08.709 } 00:22:08.709 ], 00:22:08.709 "mp_policy": "active_passive" 00:22:08.709 } 00:22:08.709 } 00:22:08.709 ]' 00:22:08.709 20:28:38 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:22:08.709 20:28:38 -- common/autotest_common.sh@1369 -- # bs=4096 00:22:08.709 20:28:38 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:22:08.709 20:28:38 -- common/autotest_common.sh@1370 -- # nb=1310720 00:22:08.709 20:28:38 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:22:08.709 20:28:38 -- common/autotest_common.sh@1374 -- # echo 5120 00:22:08.709 20:28:38 -- ftl/common.sh@63 -- # base_size=5120 00:22:08.709 20:28:38 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:08.709 20:28:38 -- ftl/common.sh@67 -- # clear_lvols 00:22:08.709 20:28:38 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:08.709 20:28:38 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:08.977 20:28:39 -- ftl/common.sh@28 -- # stores=54bca998-ed92-4533-8a76-7545a9cb1975 00:22:08.977 20:28:39 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:08.977 20:28:39 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 54bca998-ed92-4533-8a76-7545a9cb1975 00:22:09.254 20:28:39 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:09.528 20:28:39 -- ftl/common.sh@68 -- # lvs=4e8f5433-eebc-4d65-8c26-9f4c2b10b8f7 00:22:09.528 20:28:39 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4e8f5433-eebc-4d65-8c26-9f4c2b10b8f7 00:22:09.528 20:28:39 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=0e7cd4d5-2930-48b2-b087-6fb0899c19fc 00:22:09.528 20:28:39 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:09.528 20:28:39 -- ftl/dirty_shutdown.sh@52 -- # 
create_nv_cache_bdev nvc0 0000:00:10.0 0e7cd4d5-2930-48b2-b087-6fb0899c19fc 00:22:09.528 20:28:39 -- ftl/common.sh@35 -- # local name=nvc0 00:22:09.528 20:28:39 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:09.528 20:28:39 -- ftl/common.sh@37 -- # local base_bdev=0e7cd4d5-2930-48b2-b087-6fb0899c19fc 00:22:09.528 20:28:39 -- ftl/common.sh@38 -- # local cache_size= 00:22:09.528 20:28:39 -- ftl/common.sh@41 -- # get_bdev_size 0e7cd4d5-2930-48b2-b087-6fb0899c19fc 00:22:09.528 20:28:39 -- common/autotest_common.sh@1364 -- # local bdev_name=0e7cd4d5-2930-48b2-b087-6fb0899c19fc 00:22:09.528 20:28:39 -- common/autotest_common.sh@1365 -- # local bdev_info 00:22:09.528 20:28:39 -- common/autotest_common.sh@1366 -- # local bs 00:22:09.528 20:28:39 -- common/autotest_common.sh@1367 -- # local nb 00:22:09.528 20:28:39 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0e7cd4d5-2930-48b2-b087-6fb0899c19fc 00:22:09.799 20:28:39 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:22:09.799 { 00:22:09.799 "name": "0e7cd4d5-2930-48b2-b087-6fb0899c19fc", 00:22:09.799 "aliases": [ 00:22:09.799 "lvs/nvme0n1p0" 00:22:09.799 ], 00:22:09.799 "product_name": "Logical Volume", 00:22:09.799 "block_size": 4096, 00:22:09.799 "num_blocks": 26476544, 00:22:09.799 "uuid": "0e7cd4d5-2930-48b2-b087-6fb0899c19fc", 00:22:09.799 "assigned_rate_limits": { 00:22:09.799 "rw_ios_per_sec": 0, 00:22:09.799 "rw_mbytes_per_sec": 0, 00:22:09.799 "r_mbytes_per_sec": 0, 00:22:09.799 "w_mbytes_per_sec": 0 00:22:09.799 }, 00:22:09.799 "claimed": false, 00:22:09.799 "zoned": false, 00:22:09.799 "supported_io_types": { 00:22:09.799 "read": true, 00:22:09.799 "write": true, 00:22:09.799 "unmap": true, 00:22:09.799 "write_zeroes": true, 00:22:09.799 "flush": false, 00:22:09.799 "reset": true, 00:22:09.799 "compare": false, 00:22:09.799 "compare_and_write": false, 00:22:09.799 "abort": false, 00:22:09.799 "nvme_admin": false, 00:22:09.799 "nvme_io": false 00:22:09.799 }, 00:22:09.799 "driver_specific": { 00:22:09.799 "lvol": { 00:22:09.799 "lvol_store_uuid": "4e8f5433-eebc-4d65-8c26-9f4c2b10b8f7", 00:22:09.799 "base_bdev": "nvme0n1", 00:22:09.799 "thin_provision": true, 00:22:09.799 "snapshot": false, 00:22:09.799 "clone": false, 00:22:09.799 "esnap_clone": false 00:22:09.799 } 00:22:09.799 } 00:22:09.799 } 00:22:09.799 ]' 00:22:09.799 20:28:39 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:22:09.799 20:28:39 -- common/autotest_common.sh@1369 -- # bs=4096 00:22:09.799 20:28:39 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:22:09.799 20:28:40 -- common/autotest_common.sh@1370 -- # nb=26476544 00:22:09.799 20:28:40 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:22:09.799 20:28:40 -- common/autotest_common.sh@1374 -- # echo 103424 00:22:09.799 20:28:40 -- ftl/common.sh@41 -- # local base_size=5171 00:22:09.799 20:28:40 -- ftl/common.sh@44 -- # local nvc_bdev 00:22:09.799 20:28:40 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:10.066 20:28:40 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:10.066 20:28:40 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:10.066 20:28:40 -- ftl/common.sh@48 -- # get_bdev_size 0e7cd4d5-2930-48b2-b087-6fb0899c19fc 00:22:10.066 20:28:40 -- common/autotest_common.sh@1364 -- # local bdev_name=0e7cd4d5-2930-48b2-b087-6fb0899c19fc 00:22:10.066 20:28:40 -- common/autotest_common.sh@1365 -- # local bdev_info 00:22:10.066 20:28:40 -- 
common/autotest_common.sh@1366 -- # local bs 00:22:10.066 20:28:40 -- common/autotest_common.sh@1367 -- # local nb 00:22:10.066 20:28:40 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0e7cd4d5-2930-48b2-b087-6fb0899c19fc 00:22:10.339 20:28:40 -- common/autotest_common.sh@1368 -- # bdev_info='[ ... output identical to the bdev_get_bdevs dump above (Logical Volume lvs/nvme0n1p0, block_size 4096, num_blocks 26476544) elided ... ]' 00:22:10.339 20:28:40 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:22:10.339 20:28:40 -- common/autotest_common.sh@1369 -- # bs=4096 00:22:10.339 20:28:40 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:22:10.340 20:28:40 -- common/autotest_common.sh@1370 -- # nb=26476544 00:22:10.340 20:28:40 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:22:10.340 20:28:40 -- common/autotest_common.sh@1374 -- # echo 103424 00:22:10.340 20:28:40 -- ftl/common.sh@48 -- # cache_size=5171 00:22:10.340 20:28:40 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:10.615 20:28:40 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:10.615 20:28:40 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 0e7cd4d5-2930-48b2-b087-6fb0899c19fc 00:22:10.615 20:28:40 -- common/autotest_common.sh@1364 -- # local bdev_name=0e7cd4d5-2930-48b2-b087-6fb0899c19fc 00:22:10.615 20:28:40 -- common/autotest_common.sh@1365 -- # local bdev_info 00:22:10.615 20:28:40 -- common/autotest_common.sh@1366 -- # local bs 00:22:10.615 20:28:40 -- common/autotest_common.sh@1367 -- # local nb 00:22:10.615 20:28:40 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0e7cd4d5-2930-48b2-b087-6fb0899c19fc 00:22:10.906 20:28:40 -- common/autotest_common.sh@1368 -- # bdev_info='[ ... same Logical Volume dump elided ... ]' 00:22:10.906 20:28:40 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:22:10.906 20:28:41 -- common/autotest_common.sh@1369 -- # bs=4096 00:22:10.906 20:28:41 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:22:10.906 20:28:41 -- common/autotest_common.sh@1370 -- # nb=26476544 00:22:10.906 20:28:41 -- common/autotest_common.sh@1373 -- # bdev_size=103424 00:22:10.906 20:28:41 -- common/autotest_common.sh@1374 -- # echo 103424 00:22:10.906 20:28:41 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:10.906 20:28:41 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 0e7cd4d5-2930-48b2-b087-6fb0899c19fc --l2p_dram_limit 10' 00:22:10.906 20:28:41 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:10.906 20:28:41 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:22:10.906 20:28:41 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:10.906 20:28:41 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0e7cd4d5-2930-48b2-b087-6fb0899c19fc --l2p_dram_limit 10 -c nvc0n1p0 00:22:11.202 [2024-04-24 20:28:41.247586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.202 [2024-04-24 20:28:41.247647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:11.203 [2024-04-24 20:28:41.247671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:11.203 [2024-04-24 20:28:41.247684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.203 [2024-04-24 20:28:41.247752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.203 [2024-04-24 20:28:41.247764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:11.203 [2024-04-24 20:28:41.247783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:11.203 [2024-04-24 20:28:41.247794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.203 [2024-04-24 20:28:41.247818] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:11.203 [2024-04-24 20:28:41.249084] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:11.203 [2024-04-24 20:28:41.249118] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.203 [2024-04-24 20:28:41.249130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:11.203 [2024-04-24 20:28:41.249148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.307 ms 00:22:11.203 [2024-04-24
20:28:41.249161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.203 [2024-04-24 20:28:41.249245] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8826b6e1-a783-4624-b364-c3beb04252a6 00:22:11.203 [2024-04-24 20:28:41.250672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.203 [2024-04-24 20:28:41.250703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:11.203 [2024-04-24 20:28:41.250716] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:11.203 [2024-04-24 20:28:41.250729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.203 [2024-04-24 20:28:41.258385] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.203 [2024-04-24 20:28:41.258419] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:11.203 [2024-04-24 20:28:41.258432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.624 ms 00:22:11.203 [2024-04-24 20:28:41.258446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.203 [2024-04-24 20:28:41.258549] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.203 [2024-04-24 20:28:41.258567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:11.203 [2024-04-24 20:28:41.258579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:22:11.203 [2024-04-24 20:28:41.258591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.203 [2024-04-24 20:28:41.258669] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.203 [2024-04-24 20:28:41.258688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:11.203 [2024-04-24 20:28:41.258698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:11.203 [2024-04-24 20:28:41.258711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.203 [2024-04-24 20:28:41.258737] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:11.203 [2024-04-24 20:28:41.265093] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.203 [2024-04-24 20:28:41.265124] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:11.203 [2024-04-24 20:28:41.265140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:22:11.203 [2024-04-24 20:28:41.265151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.203 [2024-04-24 20:28:41.265190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.203 [2024-04-24 20:28:41.265201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:11.203 [2024-04-24 20:28:41.265214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:11.203 [2024-04-24 20:28:41.265225] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.203 [2024-04-24 20:28:41.265268] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:11.203 [2024-04-24 20:28:41.265392] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:11.203 [2024-04-24 20:28:41.265410] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 
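[Note: the FTL device whose startup is being traced here was assembled a few steps earlier in this log. Condensed for reference, the RPC sequence was the one below; $RPC is shorthand introduced for this note (the run itself uses the full path /home/vagrant/spdk_repo/spdk/scripts/rpc.py), and the UUIDs are the ones printed above:]
  $RPC bdev_lvol_delete_lvstore -u 54bca998-ed92-4533-8a76-7545a9cb1975   # clear_lvols: drop the stale lvstore found on nvme0n1
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs                               # fresh lvstore on the base namespace
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u 4e8f5433-eebc-4d65-8c26-9f4c2b10b8f7   # thin-provisioned 103424 MiB base bdev
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0        # second controller, backs the NV cache
  $RPC bdev_split_create nvc0n1 -s 5171 1                                 # 5171 MiB cache partition -> nvc0n1p0
  $RPC -t 240 bdev_ftl_create -b ftl0 -d 0e7cd4d5-2930-48b2-b087-6fb0899c19fc --l2p_dram_limit 10 -c nvc0n1p0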
00:22:11.203 [2024-04-24 20:28:41.265425] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:11.203 [2024-04-24 20:28:41.265444] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:11.203 [2024-04-24 20:28:41.265459] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:11.203 [2024-04-24 20:28:41.265473] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:11.203 [2024-04-24 20:28:41.265484] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:11.203 [2024-04-24 20:28:41.265497] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:11.203 [2024-04-24 20:28:41.265508] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:11.203 [2024-04-24 20:28:41.265546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.203 [2024-04-24 20:28:41.265556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:11.203 [2024-04-24 20:28:41.265569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:22:11.203 [2024-04-24 20:28:41.265580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.203 [2024-04-24 20:28:41.265643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.203 [2024-04-24 20:28:41.265655] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:11.203 [2024-04-24 20:28:41.265671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:11.203 [2024-04-24 20:28:41.265681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.203 [2024-04-24 20:28:41.265752] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:11.203 [2024-04-24 20:28:41.265765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:11.203 [2024-04-24 20:28:41.265780] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:11.203 [2024-04-24 20:28:41.265791] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.203 [2024-04-24 20:28:41.265803] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:11.203 [2024-04-24 20:28:41.265813] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:11.203 [2024-04-24 20:28:41.265825] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:11.203 [2024-04-24 20:28:41.265834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:11.203 [2024-04-24 20:28:41.265846] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:11.203 [2024-04-24 20:28:41.265856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:11.203 [2024-04-24 20:28:41.265868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:11.203 [2024-04-24 20:28:41.265878] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:11.203 [2024-04-24 20:28:41.265902] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:11.203 [2024-04-24 20:28:41.265912] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:11.203 [2024-04-24 20:28:41.265925] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:11.203 [2024-04-24 20:28:41.265934] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.203 [2024-04-24 20:28:41.265948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:11.203 [2024-04-24 20:28:41.265958] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:11.203 [2024-04-24 20:28:41.265972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.203 [2024-04-24 20:28:41.265982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:11.203 [2024-04-24 20:28:41.265994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:11.203 [2024-04-24 20:28:41.266003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:11.203 [2024-04-24 20:28:41.266015] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:11.203 [2024-04-24 20:28:41.266025] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:11.203 [2024-04-24 20:28:41.266037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:11.203 [2024-04-24 20:28:41.266046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:11.203 [2024-04-24 20:28:41.266058] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:11.203 [2024-04-24 20:28:41.266067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:11.203 [2024-04-24 20:28:41.266079] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:11.203 [2024-04-24 20:28:41.266089] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:11.203 [2024-04-24 20:28:41.266100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:11.203 [2024-04-24 20:28:41.266109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:11.203 [2024-04-24 20:28:41.266121] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:11.203 [2024-04-24 20:28:41.266130] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:11.203 [2024-04-24 20:28:41.266144] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:11.203 [2024-04-24 20:28:41.266153] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:11.203 [2024-04-24 20:28:41.266165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:11.203 [2024-04-24 20:28:41.266175] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:11.203 [2024-04-24 20:28:41.266187] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:11.204 [2024-04-24 20:28:41.266196] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:11.204 [2024-04-24 20:28:41.266207] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:11.204 [2024-04-24 20:28:41.266218] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:11.204 [2024-04-24 20:28:41.266233] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:11.204 [2024-04-24 20:28:41.266246] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:11.204 [2024-04-24 20:28:41.266259] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:11.204 [2024-04-24 20:28:41.266269] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:11.204 [2024-04-24 20:28:41.266281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:11.204 
[2024-04-24 20:28:41.266291] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:11.204 [2024-04-24 20:28:41.266303] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:11.204 [2024-04-24 20:28:41.266312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:11.204 [2024-04-24 20:28:41.266328] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:11.204 [2024-04-24 20:28:41.266341] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.204 [2024-04-24 20:28:41.266355] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:11.204 [2024-04-24 20:28:41.266367] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:11.204 [2024-04-24 20:28:41.266380] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:11.204 [2024-04-24 20:28:41.266390] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:11.204 [2024-04-24 20:28:41.266403] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:11.204 [2024-04-24 20:28:41.266414] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:11.204 [2024-04-24 20:28:41.266427] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:11.204 [2024-04-24 20:28:41.266437] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:11.204 [2024-04-24 20:28:41.266450] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:11.204 [2024-04-24 20:28:41.266460] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:11.204 [2024-04-24 20:28:41.266473] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:11.204 [2024-04-24 20:28:41.266484] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:11.204 [2024-04-24 20:28:41.266497] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:11.204 [2024-04-24 20:28:41.266508] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:11.204 [2024-04-24 20:28:41.266526] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:11.204 [2024-04-24 20:28:41.266537] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:11.204 [2024-04-24 20:28:41.266550] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:11.204 [2024-04-24 20:28:41.266561] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:11.204 [2024-04-24 20:28:41.266574] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:11.204 [2024-04-24 20:28:41.266585] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.204 [2024-04-24 20:28:41.266598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:11.204 [2024-04-24 20:28:41.266608] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:22:11.204 [2024-04-24 20:28:41.266622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.204 [2024-04-24 20:28:41.292563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.204 [2024-04-24 20:28:41.292606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:11.204 [2024-04-24 20:28:41.292620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.938 ms 00:22:11.204 [2024-04-24 20:28:41.292650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.204 [2024-04-24 20:28:41.292743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.204 [2024-04-24 20:28:41.292760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:11.204 [2024-04-24 20:28:41.292771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:11.204 [2024-04-24 20:28:41.292786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.204 [2024-04-24 20:28:41.349985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.204 [2024-04-24 20:28:41.350042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:11.204 [2024-04-24 20:28:41.350056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.229 ms 00:22:11.204 [2024-04-24 20:28:41.350069] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.204 [2024-04-24 20:28:41.350128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.204 [2024-04-24 20:28:41.350142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:11.204 [2024-04-24 20:28:41.350153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:11.204 [2024-04-24 20:28:41.350170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.204 [2024-04-24 20:28:41.350659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.204 [2024-04-24 20:28:41.350687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:11.204 [2024-04-24 20:28:41.350698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:22:11.204 [2024-04-24 20:28:41.350711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.204 [2024-04-24 20:28:41.350825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.204 [2024-04-24 20:28:41.350842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:11.204 [2024-04-24 20:28:41.350871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:11.204 [2024-04-24 20:28:41.350903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:11.204 [2024-04-24 20:28:41.376132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.204 [2024-04-24 20:28:41.376179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:11.204 [2024-04-24 20:28:41.376195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.245 ms 00:22:11.204 [2024-04-24 20:28:41.376212] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.204 [2024-04-24 20:28:41.391067] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:11.204 [2024-04-24 20:28:41.394360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.204 [2024-04-24 20:28:41.394390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:11.204 [2024-04-24 20:28:41.394406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.060 ms 00:22:11.204 [2024-04-24 20:28:41.394417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.463 [2024-04-24 20:28:41.476733] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:11.463 [2024-04-24 20:28:41.476802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:11.463 [2024-04-24 20:28:41.476823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.410 ms 00:22:11.463 [2024-04-24 20:28:41.476835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:11.463 [2024-04-24 20:28:41.476900] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:22:11.463 [2024-04-24 20:28:41.476916] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:22:14.753 [2024-04-24 20:28:44.539709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.754 [2024-04-24 20:28:44.539775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:14.754 [2024-04-24 20:28:44.539797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3067.772 ms 00:22:14.754 [2024-04-24 20:28:44.539808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.754 [2024-04-24 20:28:44.540055] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.754 [2024-04-24 20:28:44.540073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:14.754 [2024-04-24 20:28:44.540087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:22:14.754 [2024-04-24 20:28:44.540097] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.754 [2024-04-24 20:28:44.580258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.754 [2024-04-24 20:28:44.580324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:14.754 [2024-04-24 20:28:44.580346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.141 ms 00:22:14.754 [2024-04-24 20:28:44.580357] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.754 [2024-04-24 20:28:44.621859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.754 [2024-04-24 20:28:44.621921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:14.754 [2024-04-24 20:28:44.621942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.475 ms 00:22:14.754 
[2024-04-24 20:28:44.621953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.754 [2024-04-24 20:28:44.622468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.754 [2024-04-24 20:28:44.622490] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:14.754 [2024-04-24 20:28:44.622504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.455 ms 00:22:14.754 [2024-04-24 20:28:44.622519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.754 [2024-04-24 20:28:44.726038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.754 [2024-04-24 20:28:44.726100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:14.754 [2024-04-24 20:28:44.726122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.609 ms 00:22:14.754 [2024-04-24 20:28:44.726133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.754 [2024-04-24 20:28:44.767512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.754 [2024-04-24 20:28:44.767597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:14.754 [2024-04-24 20:28:44.767619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.362 ms 00:22:14.754 [2024-04-24 20:28:44.767630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.754 [2024-04-24 20:28:44.769867] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.754 [2024-04-24 20:28:44.769900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:22:14.754 [2024-04-24 20:28:44.769914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.159 ms 00:22:14.754 [2024-04-24 20:28:44.769925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.754 [2024-04-24 20:28:44.811870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.754 [2024-04-24 20:28:44.811941] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:14.754 [2024-04-24 20:28:44.811961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.908 ms 00:22:14.754 [2024-04-24 20:28:44.811971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.754 [2024-04-24 20:28:44.812059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.754 [2024-04-24 20:28:44.812072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:14.754 [2024-04-24 20:28:44.812086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:14.754 [2024-04-24 20:28:44.812099] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.754 [2024-04-24 20:28:44.812223] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:14.754 [2024-04-24 20:28:44.812236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:14.754 [2024-04-24 20:28:44.812249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:14.754 [2024-04-24 20:28:44.812258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:14.754 [2024-04-24 20:28:44.813360] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3571.105 ms, result 0 00:22:14.754 { 00:22:14.754 "name": "ftl0", 00:22:14.754 "uuid": "8826b6e1-a783-4624-b364-c3beb04252a6" 00:22:14.754 } 
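[Note: a quick sanity check of the sizes reported during this startup, written as shell arithmetic for this annotation; neither command was part of the run:]
  echo $(( 26476544 * 4096 / 1024 / 1024 ))   # num_blocks x block_size = 103424 MiB, the bdev_size that get_bdev_size computed above
  echo $(( 20971520 * 4 / 1024 / 1024 ))      # L2P entries x 4-byte address size = 80 MiB, matching the 80.00 MiB l2p region in the layout dump
The full 80 MiB mapping table does not fit in the 10 MiB allowed by --l2p_dram_limit 10, which is why the trace reports an L2P cache with a maximum resident size of 9 (of 10) MiB, and why the first-startup scrub of the 4 GiB NV cache data region dominates the 3571.105 ms total startup time (3067.772 ms of it).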
00:22:14.754 20:28:44 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:14.754 20:28:44 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:15.013 20:28:45 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:15.013 20:28:45 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:15.013 20:28:45 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:15.013 /dev/nbd0 00:22:15.013 20:28:45 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:15.013 20:28:45 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:22:15.013 20:28:45 -- common/autotest_common.sh@855 -- # local i 00:22:15.013 20:28:45 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:15.013 20:28:45 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:15.013 20:28:45 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:22:15.013 20:28:45 -- common/autotest_common.sh@859 -- # break 00:22:15.013 20:28:45 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:15.013 20:28:45 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:15.013 20:28:45 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:15.272 1+0 records in 00:22:15.272 1+0 records out 00:22:15.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590784 s, 6.9 MB/s 00:22:15.272 20:28:45 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:15.272 20:28:45 -- common/autotest_common.sh@872 -- # size=4096 00:22:15.272 20:28:45 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:15.272 20:28:45 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:15.272 20:28:45 -- common/autotest_common.sh@875 -- # return 0 00:22:15.272 20:28:45 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:15.272 [2024-04-24 20:28:45.341828] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:22:15.272 [2024-04-24 20:28:45.341955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81238 ] 00:22:15.531 [2024-04-24 20:28:45.510388] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.531 [2024-04-24 20:28:45.746973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.116  Copying: 204/1024 [MB] (204 MBps) Copying: 413/1024 [MB] (209 MBps) Copying: 622/1024 [MB] (209 MBps) Copying: 828/1024 [MB] (206 MBps) Copying: 1024/1024 [MB] (average 207 MBps) 00:22:22.116 00:22:22.374 20:28:52 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:24.271 20:28:54 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:24.271 [2024-04-24 20:28:54.155296] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
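[Note: the nbd/dd plumbing above, condensed; $RPC as in the earlier note. 262144 blocks x 4096 bytes = 1 GiB, the 1024 MB the progress counters report, and the md5sum taken here records the reference checksum for the data about to be written through the FTL device:]
  modprobe nbd
  $RPC nbd_start_disk ftl0 /dev/nbd0      # expose the FTL bdev as a kernel block device
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144   # 1 GiB of random data
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct   # write it to ftl0 via nbd0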
00:22:24.271 [2024-04-24 20:28:54.155405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81335 ] 00:22:24.271 [2024-04-24 20:28:54.313117] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.530 [2024-04-24 20:28:54.547692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.580  Copying: 18/1024 [MB] (18 MBps) [... 51 intermediate progress ticks, all between 17 and 21 MBps, elided ...] Copying: 1016/1024 [MB] (17 MBps) Copying: 1024/1024 [MB] (average 19 MBps) 00:23:19.580 00:23:19.580 00:23:19.580 20:29:49 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:19.838 20:29:49 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:20.096 20:29:50 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:20.096 [2024-04-24 20:29:50.134461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.096 [2024-04-24 20:29:50.134524] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:20.096 [2024-04-24 20:29:50.134549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:20.096 [2024-04-24 20:29:50.134566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.096 [2024-04-24 20:29:50.134595] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:20.096 [2024-04-24 20:29:50.138274] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.096 [2024-04-24 20:29:50.138311] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:20.096 [2024-04-24 20:29:50.138327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0]
duration: 3.661 ms 00:23:20.096 [2024-04-24 20:29:50.138337] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.096 [2024-04-24 20:29:50.140244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.096 [2024-04-24 20:29:50.140285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:20.096 [2024-04-24 20:29:50.140301] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.870 ms 00:23:20.096 [2024-04-24 20:29:50.140312] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.096 [2024-04-24 20:29:50.158495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.096 [2024-04-24 20:29:50.158531] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:20.096 [2024-04-24 20:29:50.158548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.187 ms 00:23:20.096 [2024-04-24 20:29:50.158559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.096 [2024-04-24 20:29:50.163751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.096 [2024-04-24 20:29:50.163784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:20.096 [2024-04-24 20:29:50.163805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.161 ms 00:23:20.096 [2024-04-24 20:29:50.163815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.096 [2024-04-24 20:29:50.201839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.096 [2024-04-24 20:29:50.201889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:20.096 [2024-04-24 20:29:50.201907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.993 ms 00:23:20.096 [2024-04-24 20:29:50.201918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.096 [2024-04-24 20:29:50.224983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.096 [2024-04-24 20:29:50.225032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:20.096 [2024-04-24 20:29:50.225051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.053 ms 00:23:20.096 [2024-04-24 20:29:50.225062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.096 [2024-04-24 20:29:50.225244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.096 [2024-04-24 20:29:50.225259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:20.096 [2024-04-24 20:29:50.225272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:23:20.096 [2024-04-24 20:29:50.225282] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.096 [2024-04-24 20:29:50.265714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.096 [2024-04-24 20:29:50.265766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:20.096 [2024-04-24 20:29:50.265790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.469 ms 00:23:20.096 [2024-04-24 20:29:50.265800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.096 [2024-04-24 20:29:50.305626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.096 [2024-04-24 20:29:50.305682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:20.096 [2024-04-24 
20:29:50.305702] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.832 ms 00:23:20.096 [2024-04-24 20:29:50.305713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.356 [2024-04-24 20:29:50.344765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.356 [2024-04-24 20:29:50.344825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:20.356 [2024-04-24 20:29:50.344845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.057 ms 00:23:20.356 [2024-04-24 20:29:50.344866] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.356 [2024-04-24 20:29:50.383069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.356 [2024-04-24 20:29:50.383123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:20.356 [2024-04-24 20:29:50.383143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.133 ms 00:23:20.356 [2024-04-24 20:29:50.383154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.356 [2024-04-24 20:29:50.383206] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:20.356 [2024-04-24 20:29:50.383224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [... Bands 2 through 99 elided - every band reports identically: 0 / 261120 wr_cnt: 0 state: free ...] 00:23:20.358 [2024-04-24 20:29:50.384488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:20.358 [2024-04-24 20:29:50.384507] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:20.358 [2024-04-24 20:29:50.384523] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8826b6e1-a783-4624-b364-c3beb04252a6 00:23:20.358 [2024-04-24 20:29:50.384534] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:20.358 [2024-04-24 20:29:50.384548] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:20.358 [2024-04-24 20:29:50.384557] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:20.358 [2024-04-24 20:29:50.384570] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:20.358 [2024-04-24 20:29:50.384579] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:20.358 [2024-04-24 20:29:50.384592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:20.358 [2024-04-24 20:29:50.384602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:20.358 [2024-04-24 20:29:50.384613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:20.358 [2024-04-24 20:29:50.384622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:20.358 [2024-04-24 20:29:50.384634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.358 [2024-04-24 20:29:50.384644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:20.358 [2024-04-24 20:29:50.384660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.432 ms 00:23:20.358 [2024-04-24 20:29:50.384670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.358 [2024-04-24 20:29:50.405071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.358 [2024-04-24 20:29:50.405117] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:20.358 [2024-04-24 20:29:50.405134] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.369 ms 00:23:20.358 [2024-04-24 20:29:50.405144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.358 [2024-04-24 20:29:50.405402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:23:20.358 [2024-04-24 20:29:50.405417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:20.358 [2024-04-24 20:29:50.405460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:23:20.358 [2024-04-24 20:29:50.405469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.358 [2024-04-24 20:29:50.475530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.358 [2024-04-24 20:29:50.475585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:20.358 [2024-04-24 20:29:50.475602] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.358 [2024-04-24 20:29:50.475613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.358 [2024-04-24 20:29:50.475693] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.358 [2024-04-24 20:29:50.475704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:20.358 [2024-04-24 20:29:50.475722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.358 [2024-04-24 20:29:50.475732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.358 [2024-04-24 20:29:50.475833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.358 [2024-04-24 20:29:50.475847] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:20.358 [2024-04-24 20:29:50.475875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.358 [2024-04-24 20:29:50.475885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.358 [2024-04-24 20:29:50.475909] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.358 [2024-04-24 20:29:50.475919] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:20.358 [2024-04-24 20:29:50.475932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.358 [2024-04-24 20:29:50.475945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.617 [2024-04-24 20:29:50.598869] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.617 [2024-04-24 20:29:50.598929] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:20.617 [2024-04-24 20:29:50.598948] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.617 [2024-04-24 20:29:50.598958] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.617 [2024-04-24 20:29:50.646496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.617 [2024-04-24 20:29:50.646549] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:20.617 [2024-04-24 20:29:50.646569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.617 [2024-04-24 20:29:50.646583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.617 [2024-04-24 20:29:50.646681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.617 [2024-04-24 20:29:50.646693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:20.617 [2024-04-24 20:29:50.646706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.617 [2024-04-24 20:29:50.646717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
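Note: the statistics block a few entries up (ftl_dev_dump_stats) reports total writes: 960 with user writes: 0, which is why WAF is printed as inf: at this point every write the device has seen is FTL's own metadata, and the user-write denominator is zero. The matching dump near the end of this log has finite counters, and they check out if WAF is read as the plain ratio of total media writes to user writes (an assumption about the reported metric, but one the numbers bear out):

    \mathrm{WAF} \;=\; \frac{\text{total writes}}{\text{user writes}} \;=\; \frac{70592}{69632} \;=\; 1 + \frac{960}{69632} \;\approx\; 1.0138

which matches the "WAF: 1.0138" reported there; the overhead term, 70592 - 69632 = 960 writes, is the same volume of metadata traffic that this first, all-metadata dump recorded.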
00:23:20.617 [2024-04-24 20:29:50.646766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.617 [2024-04-24 20:29:50.646777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:20.617 [2024-04-24 20:29:50.646790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.617 [2024-04-24 20:29:50.646800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.617 [2024-04-24 20:29:50.646950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.617 [2024-04-24 20:29:50.646965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:20.617 [2024-04-24 20:29:50.646979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.617 [2024-04-24 20:29:50.646989] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.617 [2024-04-24 20:29:50.647034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.617 [2024-04-24 20:29:50.647047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:20.617 [2024-04-24 20:29:50.647063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.617 [2024-04-24 20:29:50.647073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.617 [2024-04-24 20:29:50.647119] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.617 [2024-04-24 20:29:50.647131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:20.617 [2024-04-24 20:29:50.647144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.617 [2024-04-24 20:29:50.647154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.617 [2024-04-24 20:29:50.647203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:20.618 [2024-04-24 20:29:50.647214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:20.618 [2024-04-24 20:29:50.647227] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:20.618 [2024-04-24 20:29:50.647237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.618 [2024-04-24 20:29:50.647374] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 513.707 ms, result 0 00:23:20.618 true 00:23:20.618 20:29:50 -- ftl/dirty_shutdown.sh@83 -- # kill -9 81096 00:23:20.618 20:29:50 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81096 00:23:20.618 20:29:50 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:20.618 [2024-04-24 20:29:50.777770] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
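The three shell steps just above are the crux of the scenario. Line 83 of dirty_shutdown.sh kill -9's the spdk_tgt process (pid 81096; bash reports "81096 Killed" a few entries below) rather than stopping it over RPC, line 84 removes its trace file from /dev/shm, and line 87 stages 1 GiB of random data (262144 blocks of 4096 bytes) in testfile2. Line 88, further down, replays that file into the ftl0 bdev from a standalone spdk_dd driven by the previously saved ftl.json, which is what produces the recovery-style FTL startup below (blobstore "Performing recovery", superblock loaded with "SHM: clean 0, shm_clean 0"). A minimal bash sketch of the sequence, with paths taken from this log; TGT_PID stands in for the concrete pid, and the earlier RPC setup that created ftl0 and wrote ftl.json is assumed to have already run:

    #!/usr/bin/env bash
    # Sketch of dirty_shutdown.sh lines 83-88 as they appear in this log.
    SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
    FTL_DIR=/home/vagrant/spdk_repo/spdk/test/ftl
    TGT_PID=81096   # pid of the spdk_tgt started earlier in the test

    # Simulate an unclean stop: no RPC shutdown, just SIGKILL.
    kill -9 "$TGT_PID"
    rm -f "/dev/shm/spdk_tgt_trace.pid$TGT_PID"

    # Stage the payload: 262144 x 4096-byte blocks = 1 GiB of random data.
    "$SPDK_BIN_DIR/spdk_dd" --if=/dev/urandom --of="$FTL_DIR/testfile2" \
        --bs=4096 --count=262144

    # Replay it into ftl0; --count/--seek are in I/O units (4 KiB here,
    # judging by the 1024 MB total), so this targets the second 1 GiB of
    # the device. --json rebuilds the bdev stack from the config saved
    # before the kill, so FTL must come up from whatever is on disk.
    "$SPDK_BIN_DIR/spdk_dd" --if="$FTL_DIR/testfile2" --ob=ftl0 \
        --count=262144 --seek=262144 --json="$FTL_DIR/config/ftl.json"

For scale, the raw file fill below runs at about 207 MBps, while the replay through ftl0 averages about 28 MBps; at 28 MBps the 1024 MB transfer takes roughly 36 s, which lines up with the trace timestamps (FTL startup finished at 20:29:59.432, shutdown begins at 20:30:35.366).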
00:23:20.618 [2024-04-24 20:29:50.777926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81910 ] 00:23:20.876 [2024-04-24 20:29:50.958105] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.133 [2024-04-24 20:29:51.199567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.704  Copying: 202/1024 [MB] (202 MBps) Copying: 410/1024 [MB] (208 MBps) Copying: 616/1024 [MB] (205 MBps) Copying: 826/1024 [MB] (210 MBps) Copying: 1024/1024 [MB] (average 207 MBps) 00:23:27.704 00:23:27.704 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81096 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:27.704 20:29:57 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:27.704 [2024-04-24 20:29:57.903603] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:23:27.704 [2024-04-24 20:29:57.903734] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81984 ] 00:23:27.962 [2024-04-24 20:29:58.074964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.221 [2024-04-24 20:29:58.306336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.488 [2024-04-24 20:29:58.715029] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:28.488 [2024-04-24 20:29:58.715099] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:28.756 [2024-04-24 20:29:58.778564] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:28.756 [2024-04-24 20:29:58.778835] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:28.756 [2024-04-24 20:29:58.779129] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:29.016 [2024-04-24 20:29:59.034559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.016 [2024-04-24 20:29:59.034620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:29.016 [2024-04-24 20:29:59.034635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:29.016 [2024-04-24 20:29:59.034646] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.016 [2024-04-24 20:29:59.034716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.016 [2024-04-24 20:29:59.034730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:29.016 [2024-04-24 20:29:59.034741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:29.016 [2024-04-24 20:29:59.034751] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.016 [2024-04-24 20:29:59.034777] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:29.016 [2024-04-24 20:29:59.036035] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:29.016 [2024-04-24 20:29:59.036070] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:23:29.016 [2024-04-24 20:29:59.036082] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:29.016 [2024-04-24 20:29:59.036097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.304 ms 00:23:29.016 [2024-04-24 20:29:59.036108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.016 [2024-04-24 20:29:59.037614] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:29.016 [2024-04-24 20:29:59.058432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.016 [2024-04-24 20:29:59.058494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:29.017 [2024-04-24 20:29:59.058512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.851 ms 00:23:29.017 [2024-04-24 20:29:59.058522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.017 [2024-04-24 20:29:59.058606] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.017 [2024-04-24 20:29:59.058619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:29.017 [2024-04-24 20:29:59.058631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:29.017 [2024-04-24 20:29:59.058645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.017 [2024-04-24 20:29:59.066059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.017 [2024-04-24 20:29:59.066096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:29.017 [2024-04-24 20:29:59.066109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.342 ms 00:23:29.017 [2024-04-24 20:29:59.066119] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.017 [2024-04-24 20:29:59.066224] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.017 [2024-04-24 20:29:59.066239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:29.017 [2024-04-24 20:29:59.066254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:29.017 [2024-04-24 20:29:59.066264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.017 [2024-04-24 20:29:59.066308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.017 [2024-04-24 20:29:59.066320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:29.017 [2024-04-24 20:29:59.066331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:29.017 [2024-04-24 20:29:59.066341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.017 [2024-04-24 20:29:59.066369] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:29.017 [2024-04-24 20:29:59.072154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.017 [2024-04-24 20:29:59.072192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:29.017 [2024-04-24 20:29:59.072205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.801 ms 00:23:29.017 [2024-04-24 20:29:59.072216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.017 [2024-04-24 20:29:59.072251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.017 [2024-04-24 20:29:59.072262] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Decorate bands 00:23:29.017 [2024-04-24 20:29:59.072276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:29.017 [2024-04-24 20:29:59.072286] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.017 [2024-04-24 20:29:59.072339] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:29.017 [2024-04-24 20:29:59.072364] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:29.017 [2024-04-24 20:29:59.072397] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:29.017 [2024-04-24 20:29:59.072415] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:29.017 [2024-04-24 20:29:59.072480] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:29.017 [2024-04-24 20:29:59.072495] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:29.017 [2024-04-24 20:29:59.072509] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:29.017 [2024-04-24 20:29:59.072531] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:29.017 [2024-04-24 20:29:59.072543] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:29.017 [2024-04-24 20:29:59.072554] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:29.017 [2024-04-24 20:29:59.072564] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:29.017 [2024-04-24 20:29:59.072573] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:29.017 [2024-04-24 20:29:59.072583] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:29.017 [2024-04-24 20:29:59.072594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.017 [2024-04-24 20:29:59.072603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:29.017 [2024-04-24 20:29:59.072614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:23:29.017 [2024-04-24 20:29:59.072627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.017 [2024-04-24 20:29:59.072683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.017 [2024-04-24 20:29:59.072694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:29.017 [2024-04-24 20:29:59.072704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:29.017 [2024-04-24 20:29:59.072732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.017 [2024-04-24 20:29:59.072805] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:29.017 [2024-04-24 20:29:59.072819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:29.017 [2024-04-24 20:29:59.072831] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:29.017 [2024-04-24 20:29:59.072845] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:29.017 [2024-04-24 20:29:59.072859] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:29.017 [2024-04-24 
20:29:59.072868] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:29.017 [2024-04-24 20:29:59.072896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:29.017 [2024-04-24 20:29:59.072907] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:29.017 [2024-04-24 20:29:59.072943] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:29.017 [2024-04-24 20:29:59.072952] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:29.017 [2024-04-24 20:29:59.072961] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:29.017 [2024-04-24 20:29:59.072971] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:29.017 [2024-04-24 20:29:59.072980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:29.017 [2024-04-24 20:29:59.072989] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:29.017 [2024-04-24 20:29:59.072998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:29.017 [2024-04-24 20:29:59.073008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:29.017 [2024-04-24 20:29:59.073017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:29.017 [2024-04-24 20:29:59.073026] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:29.017 [2024-04-24 20:29:59.073035] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:29.017 [2024-04-24 20:29:59.073044] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:29.017 [2024-04-24 20:29:59.073053] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:29.017 [2024-04-24 20:29:59.073064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:29.017 [2024-04-24 20:29:59.073091] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:29.017 [2024-04-24 20:29:59.073100] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:29.017 [2024-04-24 20:29:59.073109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:29.017 [2024-04-24 20:29:59.073119] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:29.017 [2024-04-24 20:29:59.073128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:29.017 [2024-04-24 20:29:59.073138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:29.017 [2024-04-24 20:29:59.073147] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:29.017 [2024-04-24 20:29:59.073157] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:29.017 [2024-04-24 20:29:59.073166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:29.017 [2024-04-24 20:29:59.073176] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:29.017 [2024-04-24 20:29:59.073185] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:29.017 [2024-04-24 20:29:59.073198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:29.017 [2024-04-24 20:29:59.073208] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:29.017 [2024-04-24 20:29:59.073217] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:29.017 [2024-04-24 20:29:59.073226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 
00:23:29.017 [2024-04-24 20:29:59.073236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:29.017 [2024-04-24 20:29:59.073246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:29.017 [2024-04-24 20:29:59.073255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:29.017 [2024-04-24 20:29:59.073263] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:29.017 [2024-04-24 20:29:59.073274] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:29.017 [2024-04-24 20:29:59.073284] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:29.017 [2024-04-24 20:29:59.073295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:29.017 [2024-04-24 20:29:59.073305] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:29.017 [2024-04-24 20:29:59.073315] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:29.017 [2024-04-24 20:29:59.073324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:29.017 [2024-04-24 20:29:59.073334] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:29.017 [2024-04-24 20:29:59.073343] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:29.018 [2024-04-24 20:29:59.073354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:29.018 [2024-04-24 20:29:59.073365] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:29.018 [2024-04-24 20:29:59.073377] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:29.018 [2024-04-24 20:29:59.073389] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:29.018 [2024-04-24 20:29:59.073400] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:29.018 [2024-04-24 20:29:59.073411] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:29.018 [2024-04-24 20:29:59.073421] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:29.018 [2024-04-24 20:29:59.073432] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:29.018 [2024-04-24 20:29:59.073443] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:29.018 [2024-04-24 20:29:59.073453] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:29.018 [2024-04-24 20:29:59.073464] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:29.018 [2024-04-24 20:29:59.073475] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:29.018 [2024-04-24 20:29:59.073485] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:29.018 [2024-04-24 
20:29:59.073496] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:29.018 [2024-04-24 20:29:59.073507] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:29.018 [2024-04-24 20:29:59.073518] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:29.018 [2024-04-24 20:29:59.073528] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:29.018 [2024-04-24 20:29:59.073539] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:29.018 [2024-04-24 20:29:59.073551] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:29.018 [2024-04-24 20:29:59.073562] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:29.018 [2024-04-24 20:29:59.073572] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:29.018 [2024-04-24 20:29:59.073583] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:29.018 [2024-04-24 20:29:59.073594] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.018 [2024-04-24 20:29:59.073605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:29.018 [2024-04-24 20:29:59.073619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:23:29.018 [2024-04-24 20:29:59.073629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.018 [2024-04-24 20:29:59.098756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.018 [2024-04-24 20:29:59.098803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:29.018 [2024-04-24 20:29:59.098823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.123 ms 00:23:29.018 [2024-04-24 20:29:59.098834] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.018 [2024-04-24 20:29:59.098971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.018 [2024-04-24 20:29:59.098984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:29.018 [2024-04-24 20:29:59.098997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:23:29.018 [2024-04-24 20:29:59.099008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.018 [2024-04-24 20:29:59.161126] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.018 [2024-04-24 20:29:59.161179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:29.018 [2024-04-24 20:29:59.161196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.148 ms 00:23:29.018 [2024-04-24 20:29:59.161206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.018 [2024-04-24 20:29:59.161266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.018 [2024-04-24 20:29:59.161277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize valid map 00:23:29.018 [2024-04-24 20:29:59.161288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:29.018 [2024-04-24 20:29:59.161298] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.018 [2024-04-24 20:29:59.161768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.018 [2024-04-24 20:29:59.161790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:29.018 [2024-04-24 20:29:59.161801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:23:29.018 [2024-04-24 20:29:59.161811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.018 [2024-04-24 20:29:59.161938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.018 [2024-04-24 20:29:59.161956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:29.018 [2024-04-24 20:29:59.161966] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:23:29.018 [2024-04-24 20:29:59.161976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.018 [2024-04-24 20:29:59.185042] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.018 [2024-04-24 20:29:59.185091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:29.018 [2024-04-24 20:29:59.185106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.080 ms 00:23:29.018 [2024-04-24 20:29:59.185117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.018 [2024-04-24 20:29:59.205362] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:29.018 [2024-04-24 20:29:59.205405] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:29.018 [2024-04-24 20:29:59.205425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.018 [2024-04-24 20:29:59.205436] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:29.018 [2024-04-24 20:29:59.205448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.177 ms 00:23:29.018 [2024-04-24 20:29:59.205458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.018 [2024-04-24 20:29:59.236453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.018 [2024-04-24 20:29:59.236512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:29.018 [2024-04-24 20:29:59.236527] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.998 ms 00:23:29.018 [2024-04-24 20:29:59.236538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.255701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.277 [2024-04-24 20:29:59.255742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:29.277 [2024-04-24 20:29:59.255756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.140 ms 00:23:29.277 [2024-04-24 20:29:59.255766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.274298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.277 [2024-04-24 20:29:59.274337] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:29.277 [2024-04-24 20:29:59.274350] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.521 ms 00:23:29.277 [2024-04-24 20:29:59.274360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.274846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.277 [2024-04-24 20:29:59.274887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:29.277 [2024-04-24 20:29:59.274907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.389 ms 00:23:29.277 [2024-04-24 20:29:59.274917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.367967] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.277 [2024-04-24 20:29:59.368046] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:29.277 [2024-04-24 20:29:59.368064] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.178 ms 00:23:29.277 [2024-04-24 20:29:59.368076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.383458] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:29.277 [2024-04-24 20:29:59.386757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.277 [2024-04-24 20:29:59.386807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:29.277 [2024-04-24 20:29:59.386827] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.625 ms 00:23:29.277 [2024-04-24 20:29:59.386837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.386971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.277 [2024-04-24 20:29:59.386987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:29.277 [2024-04-24 20:29:59.386998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:29.277 [2024-04-24 20:29:59.387009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.387083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.277 [2024-04-24 20:29:59.387095] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:29.277 [2024-04-24 20:29:59.387111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:29.277 [2024-04-24 20:29:59.387121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.389271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.277 [2024-04-24 20:29:59.389304] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:29.277 [2024-04-24 20:29:59.389315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:23:29.277 [2024-04-24 20:29:59.389325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.389359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.277 [2024-04-24 20:29:59.389370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:29.277 [2024-04-24 20:29:59.389380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:29.277 [2024-04-24 20:29:59.389390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.389430] mngt/ftl_mngt_self_test.c: 
208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:29.277 [2024-04-24 20:29:59.389442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.277 [2024-04-24 20:29:59.389452] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:29.277 [2024-04-24 20:29:59.389462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:29.277 [2024-04-24 20:29:59.389472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.430239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.277 [2024-04-24 20:29:59.430482] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:29.277 [2024-04-24 20:29:59.430577] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.809 ms 00:23:29.277 [2024-04-24 20:29:59.430613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.430755] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.277 [2024-04-24 20:29:59.430796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:29.277 [2024-04-24 20:29:59.430879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:29.277 [2024-04-24 20:29:59.430945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.277 [2024-04-24 20:29:59.432397] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 398.013 ms, result 0 00:24:05.323  Copying: 30/1024 [MB] (30 MBps) Copying: 62/1024 [MB] (32 MBps) Copying: 90/1024 [MB] (27 MBps) Copying: 118/1024 [MB] (28 MBps) Copying: 148/1024 [MB] (29 MBps) Copying: 178/1024 [MB] (30 MBps) Copying: 211/1024 [MB] (32 MBps) Copying: 243/1024 [MB] (31 MBps) Copying: 272/1024 [MB] (29 MBps) Copying: 302/1024 [MB] (29 MBps) Copying: 331/1024 [MB] (29 MBps) Copying: 362/1024 [MB] (31 MBps) Copying: 392/1024 [MB] (30 MBps) Copying: 422/1024 [MB] (30 MBps) Copying: 452/1024 [MB] (29 MBps) Copying: 481/1024 [MB] (28 MBps) Copying: 509/1024 [MB] (28 MBps) Copying: 537/1024 [MB] (28 MBps) Copying: 565/1024 [MB] (27 MBps) Copying: 592/1024 [MB] (27 MBps) Copying: 621/1024 [MB] (29 MBps) Copying: 651/1024 [MB] (29 MBps) Copying: 679/1024 [MB] (27 MBps) Copying: 707/1024 [MB] (28 MBps) Copying: 736/1024 [MB] (28 MBps) Copying: 764/1024 [MB] (27 MBps) Copying: 793/1024 [MB] (29 MBps) Copying: 822/1024 [MB] (28 MBps) Copying: 849/1024 [MB] (27 MBps) Copying: 878/1024 [MB] (28 MBps) Copying: 906/1024 [MB] (28 MBps) Copying: 934/1024 [MB] (28 MBps) Copying: 963/1024 [MB] (29 MBps) Copying: 990/1024 [MB] (26 MBps) Copying: 1010/1024 [MB] (20 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-04-24 20:30:35.366290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.323 [2024-04-24 20:30:35.366344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:05.323 [2024-04-24 20:30:35.366370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:05.323 [2024-04-24 20:30:35.366381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.323 [2024-04-24 20:30:35.368543] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:05.323 [2024-04-24 20:30:35.373760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.323 [2024-04-24 20:30:35.373798] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:05.323 [2024-04-24 20:30:35.373814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.177 ms 00:24:05.323 [2024-04-24 20:30:35.373825] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.323 [2024-04-24 20:30:35.384997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.323 [2024-04-24 20:30:35.385040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:05.323 [2024-04-24 20:30:35.385054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.295 ms 00:24:05.323 [2024-04-24 20:30:35.385073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.323 [2024-04-24 20:30:35.408483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.323 [2024-04-24 20:30:35.408527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:05.323 [2024-04-24 20:30:35.408542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.429 ms 00:24:05.323 [2024-04-24 20:30:35.408553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.323 [2024-04-24 20:30:35.413689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.323 [2024-04-24 20:30:35.413725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:05.323 [2024-04-24 20:30:35.413738] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.115 ms 00:24:05.323 [2024-04-24 20:30:35.413754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.323 [2024-04-24 20:30:35.453411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.323 [2024-04-24 20:30:35.453464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:05.323 [2024-04-24 20:30:35.453480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.684 ms 00:24:05.323 [2024-04-24 20:30:35.453490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.323 [2024-04-24 20:30:35.479018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.323 [2024-04-24 20:30:35.479062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:05.323 [2024-04-24 20:30:35.479077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.527 ms 00:24:05.323 [2024-04-24 20:30:35.479088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.323 [2024-04-24 20:30:35.539576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.323 [2024-04-24 20:30:35.539654] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:05.323 [2024-04-24 20:30:35.539671] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.536 ms 00:24:05.323 [2024-04-24 20:30:35.539682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.582 [2024-04-24 20:30:35.579121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.582 [2024-04-24 20:30:35.579188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:05.582 [2024-04-24 20:30:35.579239] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.479 ms 00:24:05.582 [2024-04-24 20:30:35.579249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.582 [2024-04-24 20:30:35.618432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:05.582 [2024-04-24 20:30:35.618501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:05.582 [2024-04-24 20:30:35.618516] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.184 ms 00:24:05.582 [2024-04-24 20:30:35.618526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.582 [2024-04-24 20:30:35.656485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.582 [2024-04-24 20:30:35.656553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:05.582 [2024-04-24 20:30:35.656569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.973 ms 00:24:05.582 [2024-04-24 20:30:35.656580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.582 [2024-04-24 20:30:35.694277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.582 [2024-04-24 20:30:35.694326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:05.582 [2024-04-24 20:30:35.694340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.670 ms 00:24:05.582 [2024-04-24 20:30:35.694350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.582 [2024-04-24 20:30:35.694391] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:05.582 [2024-04-24 20:30:35.694409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 69632 / 261120 wr_cnt: 1 state: open 00:24:05.582 [2024-04-24 20:30:35.694423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 
20:30:35.694574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:05.582 [2024-04-24 20:30:35.694814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 
00:24:05.583 [2024-04-24 20:30:35.694836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.694991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 
wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:05.583 [2024-04-24 20:30:35.695370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 90: 0 / 261120 wr_cnt: 0 state: free
00:24:05.583 [2024-04-24 20:30:35.695380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:24:05.583 [2024-04-24 20:30:35.695391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:24:05.583 [2024-04-24 20:30:35.695402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:24:05.583 [2024-04-24 20:30:35.695413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:24:05.583 [2024-04-24 20:30:35.695425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:24:05.583 [2024-04-24 20:30:35.695436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:24:05.583 [2024-04-24 20:30:35.695447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:24:05.583 [2024-04-24 20:30:35.695458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:24:05.583 [2024-04-24 20:30:35.695468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:24:05.583 [2024-04-24 20:30:35.695478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:24:05.583 [2024-04-24 20:30:35.695497] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:05.583 [2024-04-24 20:30:35.695520] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID:         8826b6e1-a783-4624-b364-c3beb04252a6
00:24:05.583 [2024-04-24 20:30:35.695532] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs:    69632
00:24:05.583 [2024-04-24 20:30:35.695542] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes:        70592
00:24:05.583 [2024-04-24 20:30:35.695551] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes:         69632
00:24:05.583 [2024-04-24 20:30:35.695562] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF:                 1.0138
00:24:05.583 [2024-04-24 20:30:35.695575] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:05.583 [2024-04-24 20:30:35.695586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:24:05.583 [2024-04-24 20:30:35.695596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:24:05.583 [2024-04-24 20:30:35.695604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:24:05.584 [2024-04-24 20:30:35.695613] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:24:05.584 [2024-04-24 20:30:35.695622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.584 [2024-04-24 20:30:35.695632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Dump statistics
00:24:05.584 [2024-04-24 20:30:35.695643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 1.235 ms
00:24:05.584 [2024-04-24 20:30:35.695653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:05.584 [2024-04-24 20:30:35.716089] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:05.584 [2024-04-24 20:30:35.716138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Deinitialize L2P
00:24:05.584 [2024-04-24 20:30:35.716160] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 20.434 ms 00:24:05.584
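The WAF figure in the dump above is simply the ratio of the two write counters printed just before it: 70592 total writes over 69632 user writes is about 1.0138. A minimal sketch of the same arithmetic in Python (the helper name and rounding are mine, not SPDK's; the counter values are quoted from this log):

def waf(total_writes: int, user_writes: int) -> float:
    # Write amplification factor as ftl_dev_dump_stats reports it:
    # physical writes the device performed per write the user submitted.
    return total_writes / user_writes

# First dump (the FTL shutdown above): 70592 / 69632
assert round(waf(70592, 69632), 4) == 1.0138
# Second dump (after the dd copy-back, later in this log): 197056 / 195072
assert round(waf(197056, 195072), 4) == 1.0102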
[2024-04-24 20:30:35.716170] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.584 [2024-04-24 20:30:35.716437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.584 [2024-04-24 20:30:35.716448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:05.584 [2024-04-24 20:30:35.716459] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:24:05.584 [2024-04-24 20:30:35.716469] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.584 [2024-04-24 20:30:35.771395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.584 [2024-04-24 20:30:35.771466] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:05.584 [2024-04-24 20:30:35.771483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.584 [2024-04-24 20:30:35.771494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.584 [2024-04-24 20:30:35.771566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.584 [2024-04-24 20:30:35.771576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:05.584 [2024-04-24 20:30:35.771587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.584 [2024-04-24 20:30:35.771597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.584 [2024-04-24 20:30:35.771673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.584 [2024-04-24 20:30:35.771685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:05.584 [2024-04-24 20:30:35.771701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.584 [2024-04-24 20:30:35.771710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.584 [2024-04-24 20:30:35.771728] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.584 [2024-04-24 20:30:35.771738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:05.584 [2024-04-24 20:30:35.771748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.584 [2024-04-24 20:30:35.771758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.843 [2024-04-24 20:30:35.891010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.843 [2024-04-24 20:30:35.891086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:05.843 [2024-04-24 20:30:35.891101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.843 [2024-04-24 20:30:35.891112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.843 [2024-04-24 20:30:35.938415] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.843 [2024-04-24 20:30:35.938475] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:05.843 [2024-04-24 20:30:35.938492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.843 [2024-04-24 20:30:35.938502] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.843 [2024-04-24 20:30:35.938565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.843 [2024-04-24 20:30:35.938576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:05.843 [2024-04-24 20:30:35.938587] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.843 [2024-04-24 20:30:35.938606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.843 [2024-04-24 20:30:35.938642] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.843 [2024-04-24 20:30:35.938652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:05.843 [2024-04-24 20:30:35.938663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.843 [2024-04-24 20:30:35.938672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.843 [2024-04-24 20:30:35.938776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.843 [2024-04-24 20:30:35.938789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:05.843 [2024-04-24 20:30:35.938799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.843 [2024-04-24 20:30:35.938812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.843 [2024-04-24 20:30:35.938851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.843 [2024-04-24 20:30:35.938881] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:05.843 [2024-04-24 20:30:35.938892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.843 [2024-04-24 20:30:35.938901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.843 [2024-04-24 20:30:35.938962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.843 [2024-04-24 20:30:35.938974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:05.843 [2024-04-24 20:30:35.938985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.843 [2024-04-24 20:30:35.938995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.843 [2024-04-24 20:30:35.939044] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:05.843 [2024-04-24 20:30:35.939055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:05.843 [2024-04-24 20:30:35.939066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:05.843 [2024-04-24 20:30:35.939076] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.843 [2024-04-24 20:30:35.939199] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 575.592 ms, result 0 00:24:07.219 00:24:07.219 00:24:07.477 20:30:37 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:09.389 20:30:39 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:09.389 [2024-04-24 20:30:39.282026] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
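The two shell-trace lines above are the core of the dirty-shutdown check: the test md5sums testfile2, then reads the data back out of the FTL bdev with spdk_dd (--ib names the input bdev, --of the output file), presumably to compare checksums afterwards; the comparison itself is not shown at this point in the log. Assuming the usual 4 KiB FTL logical block size (not stated here, but consistent with the log), --count=262144 works out to exactly 1 GiB, matching the "1024/1024 [MB]" copy meter further down. A quick back-of-the-envelope sketch under that assumption:

BLOCK_SIZE = 4096             # assumed FTL logical block size, in bytes
count = 262144                # --count from the spdk_dd line above
total = count * BLOCK_SIZE
assert total == 1 << 30       # exactly 1 GiB, i.e. the 1024 [MB] reported below

avg_mib_s = 34                # "average 34 MBps" from the copy meter below
print(f"expected copy time ~{(total >> 20) / avg_mib_s:.0f} s")   # ~30 s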
00:24:09.389 [2024-04-24 20:30:39.282147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82400 ] 00:24:09.389 [2024-04-24 20:30:39.446318] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.648 [2024-04-24 20:30:39.686176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:09.907 [2024-04-24 20:30:40.108849] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:09.907 [2024-04-24 20:30:40.108927] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:10.167 [2024-04-24 20:30:40.264362] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.167 [2024-04-24 20:30:40.264420] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:10.167 [2024-04-24 20:30:40.264452] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:10.167 [2024-04-24 20:30:40.264463] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.167 [2024-04-24 20:30:40.264524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.167 [2024-04-24 20:30:40.264538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:10.167 [2024-04-24 20:30:40.264549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:24:10.167 [2024-04-24 20:30:40.264560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.167 [2024-04-24 20:30:40.264581] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:10.167 [2024-04-24 20:30:40.265810] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:10.167 [2024-04-24 20:30:40.265844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.167 [2024-04-24 20:30:40.265869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:10.167 [2024-04-24 20:30:40.265881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.268 ms 00:24:10.167 [2024-04-24 20:30:40.265891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.167 [2024-04-24 20:30:40.267354] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:10.167 [2024-04-24 20:30:40.287467] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.167 [2024-04-24 20:30:40.287510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:10.167 [2024-04-24 20:30:40.287533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.145 ms 00:24:10.167 [2024-04-24 20:30:40.287543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.167 [2024-04-24 20:30:40.287605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.167 [2024-04-24 20:30:40.287618] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:10.167 [2024-04-24 20:30:40.287630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:24:10.167 [2024-04-24 20:30:40.287640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.167 [2024-04-24 20:30:40.294630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.167 [2024-04-24 
20:30:40.294665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:10.167 [2024-04-24 20:30:40.294678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.924 ms 00:24:10.167 [2024-04-24 20:30:40.294688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.167 [2024-04-24 20:30:40.294785] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.167 [2024-04-24 20:30:40.294801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:10.167 [2024-04-24 20:30:40.294811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:24:10.167 [2024-04-24 20:30:40.294821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.167 [2024-04-24 20:30:40.294875] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.167 [2024-04-24 20:30:40.294893] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:10.167 [2024-04-24 20:30:40.294903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:10.167 [2024-04-24 20:30:40.294922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.167 [2024-04-24 20:30:40.294950] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:10.167 [2024-04-24 20:30:40.300936] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.167 [2024-04-24 20:30:40.300968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:10.167 [2024-04-24 20:30:40.300980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.004 ms 00:24:10.167 [2024-04-24 20:30:40.300990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.167 [2024-04-24 20:30:40.301022] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.167 [2024-04-24 20:30:40.301032] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:10.167 [2024-04-24 20:30:40.301043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:10.167 [2024-04-24 20:30:40.301052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.167 [2024-04-24 20:30:40.301104] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:10.167 [2024-04-24 20:30:40.301133] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:10.167 [2024-04-24 20:30:40.301167] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:10.167 [2024-04-24 20:30:40.301185] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:10.167 [2024-04-24 20:30:40.301250] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:10.167 [2024-04-24 20:30:40.301273] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:10.167 [2024-04-24 20:30:40.301286] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:10.167 [2024-04-24 20:30:40.301299] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:10.167 [2024-04-24 20:30:40.301311] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:10.167 [2024-04-24 20:30:40.301326] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:10.167 [2024-04-24 20:30:40.301336] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:10.167 [2024-04-24 20:30:40.301347] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:10.167 [2024-04-24 20:30:40.301356] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:10.167 [2024-04-24 20:30:40.301367] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.167 [2024-04-24 20:30:40.301376] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:10.167 [2024-04-24 20:30:40.301387] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:24:10.167 [2024-04-24 20:30:40.301396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.167 [2024-04-24 20:30:40.301453] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.167 [2024-04-24 20:30:40.301463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:10.167 [2024-04-24 20:30:40.301477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:10.167 [2024-04-24 20:30:40.301486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.167 [2024-04-24 20:30:40.301551] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:10.167 [2024-04-24 20:30:40.301563] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:10.167 [2024-04-24 20:30:40.301573] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:10.167 [2024-04-24 20:30:40.301584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.167 [2024-04-24 20:30:40.301594] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:10.167 [2024-04-24 20:30:40.301604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:10.167 [2024-04-24 20:30:40.301613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:10.167 [2024-04-24 20:30:40.301622] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:10.167 [2024-04-24 20:30:40.301631] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:10.167 [2024-04-24 20:30:40.301640] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:10.167 [2024-04-24 20:30:40.301650] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:10.167 [2024-04-24 20:30:40.301660] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:10.167 [2024-04-24 20:30:40.301680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:10.167 [2024-04-24 20:30:40.301689] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:10.167 [2024-04-24 20:30:40.301699] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:10.167 [2024-04-24 20:30:40.301707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.167 [2024-04-24 20:30:40.301716] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:10.167 [2024-04-24 20:30:40.301725] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:10.167 [2024-04-24 20:30:40.301734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:24:10.167 [2024-04-24 20:30:40.301743] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:10.167 [2024-04-24 20:30:40.301752] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:10.167 [2024-04-24 20:30:40.301761] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:10.167 [2024-04-24 20:30:40.301770] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:10.167 [2024-04-24 20:30:40.301779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:10.167 [2024-04-24 20:30:40.301788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:10.167 [2024-04-24 20:30:40.301797] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:10.167 [2024-04-24 20:30:40.301806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:10.167 [2024-04-24 20:30:40.301814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:10.167 [2024-04-24 20:30:40.301823] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:10.167 [2024-04-24 20:30:40.301832] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:10.168 [2024-04-24 20:30:40.301840] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:10.168 [2024-04-24 20:30:40.301849] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:10.168 [2024-04-24 20:30:40.301881] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:10.168 [2024-04-24 20:30:40.301890] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:10.168 [2024-04-24 20:30:40.301900] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:10.168 [2024-04-24 20:30:40.301909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:10.168 [2024-04-24 20:30:40.301918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:10.168 [2024-04-24 20:30:40.301927] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:10.168 [2024-04-24 20:30:40.301936] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:10.168 [2024-04-24 20:30:40.301945] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:10.168 [2024-04-24 20:30:40.301953] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:10.168 [2024-04-24 20:30:40.301963] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:10.168 [2024-04-24 20:30:40.301977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:10.168 [2024-04-24 20:30:40.301995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:10.168 [2024-04-24 20:30:40.302006] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:10.168 [2024-04-24 20:30:40.302016] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:10.168 [2024-04-24 20:30:40.302024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:10.168 [2024-04-24 20:30:40.302034] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:10.168 [2024-04-24 20:30:40.302046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:10.168 [2024-04-24 20:30:40.302055] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:10.168 [2024-04-24 20:30:40.302065] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:10.168 [2024-04-24 20:30:40.302077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:10.168 [2024-04-24 20:30:40.302088] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:24:10.168 [2024-04-24 20:30:40.302098] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80
00:24:10.168 [2024-04-24 20:30:40.302109] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80
00:24:10.168 [2024-04-24 20:30:40.302119] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400
00:24:10.168 [2024-04-24 20:30:40.302129] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400
00:24:10.168 [2024-04-24 20:30:40.302139] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400
00:24:10.168 [2024-04-24 20:30:40.302149] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400
00:24:10.168 [2024-04-24 20:30:40.302160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40
00:24:10.168 [2024-04-24 20:30:40.302170] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40
00:24:10.168 [2024-04-24 20:30:40.302180] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20
00:24:10.168 [2024-04-24 20:30:40.302190] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20
00:24:10.168 [2024-04-24 20:30:40.302200] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000
00:24:10.168 [2024-04-24 20:30:40.302210] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120
00:24:10.168 [2024-04-24 20:30:40.302220] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:10.168 [2024-04-24 20:30:40.302231] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:10.168 [2024-04-24 20:30:40.302242] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:10.168 [2024-04-24 20:30:40.302252] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:10.168 [2024-04-24 20:30:40.302262] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:10.168 [2024-04-24 20:30:40.302272] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
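Several of the sizes in the layout dumps above can be re-derived from one another, which is a handy sanity check when reading these logs. Everything in the sketch below is quoted from the log except the 4 KiB FTL block size, which is my assumption (it is what makes the numbers line up), and the identification of base-dev region type:0x9 with the data_btm region, which I infer from the matching sizes:

# Cross-checks on the layout dumps above; only BLK = 4096 is assumed.
BLK = 4096

# "L2P entries: 20971520" x "L2P address size: 4" == "Region l2p ... 80.00 MiB"
assert 20971520 * 4 == 80 * 1024**2

# Base-dev region type:0x9 (blk_sz:0x1900000) matches "data_btm ... 102400.00 MiB"
assert 0x1900000 * BLK == 102400 * 1024**2

# ...and at 261120 blocks per band it holds the ~100 bands seen in the dumps.
print(0x1900000 // 261120, "full bands")     # -> 100

# "data_nvc ... 4096.00 MiB" over "NV cache chunk count 4" -> 1 GiB per chunk.
assert 4096 // 4 == 1024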
00:24:10.168 [2024-04-24 20:30:40.302282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.168 [2024-04-24 20:30:40.302292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Layout upgrade
00:24:10.168 [2024-04-24 20:30:40.302302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.767 ms
00:24:10.168 [2024-04-24 20:30:40.302311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.168 [2024-04-24 20:30:40.326973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.168 [2024-04-24 20:30:40.327012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize metadata
00:24:10.168 [2024-04-24 20:30:40.327027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 24.654 ms
00:24:10.168 [2024-04-24 20:30:40.327037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.168 [2024-04-24 20:30:40.327121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.168 [2024-04-24 20:30:40.327136] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize band addresses
00:24:10.168 [2024-04-24 20:30:40.327147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.054 ms
00:24:10.168 [2024-04-24 20:30:40.327157] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.168 [2024-04-24 20:30:40.390826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.168 [2024-04-24 20:30:40.390890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize NV cache
00:24:10.168 [2024-04-24 20:30:40.390906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 63.713 ms
00:24:10.168 [2024-04-24 20:30:40.390929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.168 [2024-04-24 20:30:40.390991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.168 [2024-04-24 20:30:40.391003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize valid map
00:24:10.168 [2024-04-24 20:30:40.391014] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.003 ms
00:24:10.168 [2024-04-24 20:30:40.391024] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.168 [2024-04-24 20:30:40.391506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.168 [2024-04-24 20:30:40.391520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize trim map
00:24:10.168 [2024-04-24 20:30:40.391531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.419 ms
00:24:10.168 [2024-04-24 20:30:40.391542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.168 [2024-04-24 20:30:40.391663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.168 [2024-04-24 20:30:40.391676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize bands metadata
00:24:10.168 [2024-04-24 20:30:40.391686] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 0.099 ms
00:24:10.168 [2024-04-24 20:30:40.391696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] 	 status:   0
00:24:10.426 [2024-04-24 20:30:40.414754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:10.426 [2024-04-24 20:30:40.414808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 	 name:     Initialize reloc
00:24:10.426 [2024-04-24 20:30:40.414824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 	 duration: 23.071 ms 00:24:10.426 [2024-04-24
20:30:40.414835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.426 [2024-04-24 20:30:40.434579] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:10.426 [2024-04-24 20:30:40.434635] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:10.426 [2024-04-24 20:30:40.434651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.426 [2024-04-24 20:30:40.434663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:10.426 [2024-04-24 20:30:40.434676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.691 ms 00:24:10.426 [2024-04-24 20:30:40.434686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.426 [2024-04-24 20:30:40.466100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.426 [2024-04-24 20:30:40.466159] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:10.427 [2024-04-24 20:30:40.466174] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.409 ms 00:24:10.427 [2024-04-24 20:30:40.466185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.427 [2024-04-24 20:30:40.485861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.427 [2024-04-24 20:30:40.485905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:10.427 [2024-04-24 20:30:40.485919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.636 ms 00:24:10.427 [2024-04-24 20:30:40.485929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.427 [2024-04-24 20:30:40.505492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.427 [2024-04-24 20:30:40.505536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:10.427 [2024-04-24 20:30:40.505551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.534 ms 00:24:10.427 [2024-04-24 20:30:40.505561] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.427 [2024-04-24 20:30:40.506063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.427 [2024-04-24 20:30:40.506078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:10.427 [2024-04-24 20:30:40.506090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:24:10.427 [2024-04-24 20:30:40.506100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.427 [2024-04-24 20:30:40.608665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.427 [2024-04-24 20:30:40.608735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:10.427 [2024-04-24 20:30:40.608753] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.708 ms 00:24:10.427 [2024-04-24 20:30:40.608764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.427 [2024-04-24 20:30:40.623117] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:10.427 [2024-04-24 20:30:40.626551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.427 [2024-04-24 20:30:40.626592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:10.427 [2024-04-24 20:30:40.626607] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.736 ms 00:24:10.427 [2024-04-24 20:30:40.626619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.427 [2024-04-24 20:30:40.626738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.427 [2024-04-24 20:30:40.626760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:10.427 [2024-04-24 20:30:40.626771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:10.427 [2024-04-24 20:30:40.626781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.427 [2024-04-24 20:30:40.628010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.427 [2024-04-24 20:30:40.628051] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:10.427 [2024-04-24 20:30:40.628063] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.189 ms 00:24:10.427 [2024-04-24 20:30:40.628073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.427 [2024-04-24 20:30:40.630257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.427 [2024-04-24 20:30:40.630285] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:10.427 [2024-04-24 20:30:40.630300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.155 ms 00:24:10.427 [2024-04-24 20:30:40.630311] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.427 [2024-04-24 20:30:40.630345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.427 [2024-04-24 20:30:40.630356] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:10.427 [2024-04-24 20:30:40.630367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:10.427 [2024-04-24 20:30:40.630377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.427 [2024-04-24 20:30:40.630420] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:10.427 [2024-04-24 20:30:40.630433] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.427 [2024-04-24 20:30:40.630443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:10.427 [2024-04-24 20:30:40.630453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:10.427 [2024-04-24 20:30:40.630466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.685 [2024-04-24 20:30:40.668807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.685 [2024-04-24 20:30:40.668860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:10.685 [2024-04-24 20:30:40.668875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.382 ms 00:24:10.685 [2024-04-24 20:30:40.668886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.685 [2024-04-24 20:30:40.668963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:10.685 [2024-04-24 20:30:40.668981] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:10.685 [2024-04-24 20:30:40.669002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:10.685 [2024-04-24 20:30:40.669013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:10.685 [2024-04-24 20:30:40.671771] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 407.249 ms, result 0 00:24:42.042  Copying: 1720/1048576 [kB] (1720 kBps) Copying: 5364/1048576 [kB] (3644 kBps) Copying: 32/1024 [MB] (27 MBps) Copying: 70/1024 [MB] (37 MBps) Copying: 108/1024 [MB] (37 MBps) Copying: 145/1024 [MB] (37 MBps) Copying: 182/1024 [MB] (36 MBps) Copying: 219/1024 [MB] (36 MBps) Copying: 256/1024 [MB] (37 MBps) Copying: 294/1024 [MB] (38 MBps) Copying: 331/1024 [MB] (36 MBps) Copying: 364/1024 [MB] (33 MBps) Copying: 402/1024 [MB] (37 MBps) Copying: 439/1024 [MB] (37 MBps) Copying: 476/1024 [MB] (36 MBps) Copying: 513/1024 [MB] (37 MBps) Copying: 551/1024 [MB] (38 MBps) Copying: 587/1024 [MB] (35 MBps) Copying: 622/1024 [MB] (34 MBps) Copying: 656/1024 [MB] (34 MBps) Copying: 690/1024 [MB] (33 MBps) Copying: 726/1024 [MB] (35 MBps) Copying: 765/1024 [MB] (39 MBps) Copying: 801/1024 [MB] (36 MBps) Copying: 838/1024 [MB] (37 MBps) Copying: 875/1024 [MB] (37 MBps) Copying: 914/1024 [MB] (38 MBps) Copying: 952/1024 [MB] (38 MBps) Copying: 988/1024 [MB] (35 MBps) Copying: 1023/1024 [MB] (35 MBps) Copying: 1024/1024 [MB] (average 34 MBps)[2024-04-24 20:31:12.093805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.042 [2024-04-24 20:31:12.093908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:42.042 [2024-04-24 20:31:12.093934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:42.042 [2024-04-24 20:31:12.093951] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.042 [2024-04-24 20:31:12.093996] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:42.042 [2024-04-24 20:31:12.098322] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.042 [2024-04-24 20:31:12.098364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:42.042 [2024-04-24 20:31:12.098379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.306 ms 00:24:42.042 [2024-04-24 20:31:12.098390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.042 [2024-04-24 20:31:12.098618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.042 [2024-04-24 20:31:12.098632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:42.043 [2024-04-24 20:31:12.098644] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:24:42.043 [2024-04-24 20:31:12.098655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.043 [2024-04-24 20:31:12.119296] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.043 [2024-04-24 20:31:12.119361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:42.043 [2024-04-24 20:31:12.119389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.652 ms 00:24:42.043 [2024-04-24 20:31:12.119416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.043 [2024-04-24 20:31:12.124746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.043 [2024-04-24 20:31:12.124787] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:42.043 [2024-04-24 20:31:12.124800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.299 ms 00:24:42.043 [2024-04-24 20:31:12.124810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.043 [2024-04-24 
20:31:12.164751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.043 [2024-04-24 20:31:12.164809] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:42.043 [2024-04-24 20:31:12.164825] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.931 ms 00:24:42.043 [2024-04-24 20:31:12.164835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.043 [2024-04-24 20:31:12.187458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.043 [2024-04-24 20:31:12.187507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:42.043 [2024-04-24 20:31:12.187530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.600 ms 00:24:42.043 [2024-04-24 20:31:12.187541] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.043 [2024-04-24 20:31:12.192016] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.043 [2024-04-24 20:31:12.192055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:42.043 [2024-04-24 20:31:12.192068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.433 ms 00:24:42.043 [2024-04-24 20:31:12.192078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.043 [2024-04-24 20:31:12.230597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.043 [2024-04-24 20:31:12.230648] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:42.043 [2024-04-24 20:31:12.230663] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.561 ms 00:24:42.043 [2024-04-24 20:31:12.230672] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.043 [2024-04-24 20:31:12.270673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.043 [2024-04-24 20:31:12.270728] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:42.043 [2024-04-24 20:31:12.270744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.015 ms 00:24:42.043 [2024-04-24 20:31:12.270755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.313 [2024-04-24 20:31:12.312928] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.313 [2024-04-24 20:31:12.312996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:42.313 [2024-04-24 20:31:12.313012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.157 ms 00:24:42.313 [2024-04-24 20:31:12.313023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.313 [2024-04-24 20:31:12.354540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.313 [2024-04-24 20:31:12.354609] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:42.313 [2024-04-24 20:31:12.354627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.450 ms 00:24:42.313 [2024-04-24 20:31:12.354638] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.313 [2024-04-24 20:31:12.354707] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:42.313 [2024-04-24 20:31:12.354728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:42.313 [2024-04-24 20:31:12.354742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 
state: open 00:24:42.313 [2024-04-24 20:31:12.354754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.354992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 
261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:42.313 [2024-04-24 20:31:12.355184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:42.314 [2024-04-24 20:31:12.355554] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:24:42.314 [2024-04-24 20:31:12.355801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
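The bands-validity dump above is easy to check mechanically: Band 1 is closed with all 261120 blocks valid, Band 2 is open with 3584, and the remaining bands are free, so the valid blocks sum to 261120 + 3584 = 264704, exactly the "total valid LBAs" reported in the stats block that follows. A minimal parser sketch (the regex and tallies are mine; the sample lines are quoted from this log):

import re

BAND_RE = re.compile(r"Band (\d+): (\d+) / (\d+) wr_cnt: (\d+) state: (\w+)")

sample = [
    "[FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed",
    "[FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open",
    "[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free",
]
valid = sum(int(m[2]) for line in sample if (m := BAND_RE.search(line)))
# The 97 remaining free bands contribute 0, so this matches the
# "total valid LBAs: 264704" line in the stats dump right below.
assert valid == 261120 + 3584 == 264704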
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8826b6e1-a783-4624-b364-c3beb04252a6 00:24:42.314 [2024-04-24 20:31:12.355840] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:24:42.314 [2024-04-24 20:31:12.355850] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 197056 00:24:42.314 [2024-04-24 20:31:12.355867] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 195072 00:24:42.314 [2024-04-24 20:31:12.355878] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0102 00:24:42.314 [2024-04-24 20:31:12.355904] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:42.314 [2024-04-24 20:31:12.355932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:42.314 [2024-04-24 20:31:12.355959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:42.314 [2024-04-24 20:31:12.355969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:42.314 [2024-04-24 20:31:12.355978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:42.314 [2024-04-24 20:31:12.355989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.314 [2024-04-24 20:31:12.355999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:42.314 [2024-04-24 20:31:12.356011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:24:42.314 [2024-04-24 20:31:12.356021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.314 [2024-04-24 20:31:12.376458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.314 [2024-04-24 20:31:12.376518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:42.314 [2024-04-24 20:31:12.376533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.419 ms 00:24:42.314 [2024-04-24 20:31:12.376544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.314 [2024-04-24 20:31:12.376830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:42.314 [2024-04-24 20:31:12.376842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:42.314 [2024-04-24 20:31:12.376885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:24:42.314 [2024-04-24 20:31:12.376895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.314 [2024-04-24 20:31:12.430496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.314 [2024-04-24 20:31:12.430559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:42.314 [2024-04-24 20:31:12.430579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.314 [2024-04-24 20:31:12.430590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.314 [2024-04-24 20:31:12.430663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.314 [2024-04-24 20:31:12.430674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:42.314 [2024-04-24 20:31:12.430684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.314 [2024-04-24 20:31:12.430693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.314 [2024-04-24 20:31:12.430766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.314 [2024-04-24 20:31:12.430778] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:42.314 [2024-04-24 20:31:12.430788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.314 [2024-04-24 20:31:12.430803] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.314 [2024-04-24 20:31:12.430820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.314 [2024-04-24 20:31:12.430830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:42.314 [2024-04-24 20:31:12.430839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.314 [2024-04-24 20:31:12.430849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.572 [2024-04-24 20:31:12.549469] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.572 [2024-04-24 20:31:12.549536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:42.572 [2024-04-24 20:31:12.549553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.572 [2024-04-24 20:31:12.549568] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.572 [2024-04-24 20:31:12.597926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.572 [2024-04-24 20:31:12.597997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:42.572 [2024-04-24 20:31:12.598012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.572 [2024-04-24 20:31:12.598023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.572 [2024-04-24 20:31:12.598092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.572 [2024-04-24 20:31:12.598104] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:42.572 [2024-04-24 20:31:12.598115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.572 [2024-04-24 20:31:12.598125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.572 [2024-04-24 20:31:12.598173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.572 [2024-04-24 20:31:12.598184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:42.572 [2024-04-24 20:31:12.598194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.572 [2024-04-24 20:31:12.598204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.572 [2024-04-24 20:31:12.598308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.572 [2024-04-24 20:31:12.598320] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:42.572 [2024-04-24 20:31:12.598330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.572 [2024-04-24 20:31:12.598340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.572 [2024-04-24 20:31:12.598379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.572 [2024-04-24 20:31:12.598396] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:42.572 [2024-04-24 20:31:12.598406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.572 [2024-04-24 20:31:12.598416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.572 [2024-04-24 20:31:12.598454] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:24:42.572 [2024-04-24 20:31:12.598465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:42.572 [2024-04-24 20:31:12.598475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.572 [2024-04-24 20:31:12.598485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.572 [2024-04-24 20:31:12.598534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:42.572 [2024-04-24 20:31:12.598546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:42.572 [2024-04-24 20:31:12.598556] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:42.572 [2024-04-24 20:31:12.598565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:42.572 [2024-04-24 20:31:12.598685] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 505.678 ms, result 0 00:24:43.947 00:24:43.947 00:24:43.947 20:31:13 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:45.873 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:45.873 20:31:15 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:45.873 [2024-04-24 20:31:15.797470] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:24:45.873 [2024-04-24 20:31:15.797594] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82771 ] 00:24:45.873 [2024-04-24 20:31:15.969322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.132 [2024-04-24 20:31:16.207368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.390 [2024-04-24 20:31:16.619084] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:46.390 [2024-04-24 20:31:16.619149] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:46.658 [2024-04-24 20:31:16.774155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.658 [2024-04-24 20:31:16.774218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:46.658 [2024-04-24 20:31:16.774234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:46.658 [2024-04-24 20:31:16.774245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.658 [2024-04-24 20:31:16.774307] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.658 [2024-04-24 20:31:16.774321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:46.658 [2024-04-24 20:31:16.774331] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:46.658 [2024-04-24 20:31:16.774341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.658 [2024-04-24 20:31:16.774363] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:46.658 [2024-04-24 20:31:16.775548] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:46.658 [2024-04-24 
20:31:16.775579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.658 [2024-04-24 20:31:16.775590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:46.658 [2024-04-24 20:31:16.775601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.222 ms 00:24:46.658 [2024-04-24 20:31:16.775612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.658 [2024-04-24 20:31:16.777062] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:46.658 [2024-04-24 20:31:16.796479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.658 [2024-04-24 20:31:16.796520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:46.658 [2024-04-24 20:31:16.796541] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.449 ms 00:24:46.658 [2024-04-24 20:31:16.796552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.658 [2024-04-24 20:31:16.796611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.658 [2024-04-24 20:31:16.796625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:46.658 [2024-04-24 20:31:16.796635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:46.658 [2024-04-24 20:31:16.796645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.658 [2024-04-24 20:31:16.803437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.658 [2024-04-24 20:31:16.803469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:46.658 [2024-04-24 20:31:16.803481] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.730 ms 00:24:46.658 [2024-04-24 20:31:16.803492] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.658 [2024-04-24 20:31:16.803587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.658 [2024-04-24 20:31:16.803601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:46.658 [2024-04-24 20:31:16.803612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:24:46.658 [2024-04-24 20:31:16.803622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.658 [2024-04-24 20:31:16.803666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.658 [2024-04-24 20:31:16.803681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:46.658 [2024-04-24 20:31:16.803691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:46.658 [2024-04-24 20:31:16.803700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.658 [2024-04-24 20:31:16.803728] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:46.658 [2024-04-24 20:31:16.809444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.658 [2024-04-24 20:31:16.809477] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:46.658 [2024-04-24 20:31:16.809489] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.732 ms 00:24:46.658 [2024-04-24 20:31:16.809499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.658 [2024-04-24 20:31:16.809531] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.658 [2024-04-24 20:31:16.809542] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:46.658 [2024-04-24 20:31:16.809552] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:46.658 [2024-04-24 20:31:16.809562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.658 [2024-04-24 20:31:16.809611] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:46.658 [2024-04-24 20:31:16.809639] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:46.658 [2024-04-24 20:31:16.809672] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:46.658 [2024-04-24 20:31:16.809690] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:46.658 [2024-04-24 20:31:16.809754] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:46.658 [2024-04-24 20:31:16.809767] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:46.658 [2024-04-24 20:31:16.809780] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:46.658 [2024-04-24 20:31:16.809793] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:46.658 [2024-04-24 20:31:16.809804] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:46.658 [2024-04-24 20:31:16.809819] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:46.658 [2024-04-24 20:31:16.809829] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:46.658 [2024-04-24 20:31:16.809839] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:46.659 [2024-04-24 20:31:16.809849] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:46.659 [2024-04-24 20:31:16.809874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.659 [2024-04-24 20:31:16.809885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:46.659 [2024-04-24 20:31:16.809895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:24:46.659 [2024-04-24 20:31:16.809905] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.659 [2024-04-24 20:31:16.809960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.659 [2024-04-24 20:31:16.809971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:46.659 [2024-04-24 20:31:16.809984] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:46.659 [2024-04-24 20:31:16.809994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.659 [2024-04-24 20:31:16.810059] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:46.659 [2024-04-24 20:31:16.810071] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:46.659 [2024-04-24 20:31:16.810082] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:46.659 [2024-04-24 20:31:16.810092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.659 [2024-04-24 20:31:16.810102] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] 
Region l2p 00:24:46.659 [2024-04-24 20:31:16.810111] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:46.659 [2024-04-24 20:31:16.810121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:46.659 [2024-04-24 20:31:16.810130] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:46.659 [2024-04-24 20:31:16.810139] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:46.659 [2024-04-24 20:31:16.810148] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:46.659 [2024-04-24 20:31:16.810157] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:46.659 [2024-04-24 20:31:16.810166] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:46.659 [2024-04-24 20:31:16.810188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:46.659 [2024-04-24 20:31:16.810197] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:46.659 [2024-04-24 20:31:16.810206] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:46.659 [2024-04-24 20:31:16.810215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.659 [2024-04-24 20:31:16.810224] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:46.659 [2024-04-24 20:31:16.810233] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:46.659 [2024-04-24 20:31:16.810242] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.659 [2024-04-24 20:31:16.810251] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:46.659 [2024-04-24 20:31:16.810261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:46.659 [2024-04-24 20:31:16.810270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:46.659 [2024-04-24 20:31:16.810279] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:46.659 [2024-04-24 20:31:16.810288] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:46.659 [2024-04-24 20:31:16.810297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:46.659 [2024-04-24 20:31:16.810305] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:46.659 [2024-04-24 20:31:16.810315] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:46.659 [2024-04-24 20:31:16.810324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:46.659 [2024-04-24 20:31:16.810332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:46.659 [2024-04-24 20:31:16.810341] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:46.659 [2024-04-24 20:31:16.810350] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:46.659 [2024-04-24 20:31:16.810359] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:46.659 [2024-04-24 20:31:16.810368] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:46.659 [2024-04-24 20:31:16.810376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:46.659 [2024-04-24 20:31:16.810385] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:46.659 [2024-04-24 20:31:16.810394] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:46.659 [2024-04-24 20:31:16.810403] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:46.659 [2024-04-24 20:31:16.810411] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:46.659 [2024-04-24 20:31:16.810420] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:46.659 [2024-04-24 20:31:16.810429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:46.659 [2024-04-24 20:31:16.810437] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:46.659 [2024-04-24 20:31:16.810447] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:46.659 [2024-04-24 20:31:16.810461] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:46.659 [2024-04-24 20:31:16.810475] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:46.659 [2024-04-24 20:31:16.810486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:46.659 [2024-04-24 20:31:16.810495] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:46.659 [2024-04-24 20:31:16.810505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:46.659 [2024-04-24 20:31:16.810514] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:46.659 [2024-04-24 20:31:16.810522] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:46.659 [2024-04-24 20:31:16.810531] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:46.659 [2024-04-24 20:31:16.810541] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:46.659 [2024-04-24 20:31:16.810553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:46.659 [2024-04-24 20:31:16.810564] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:46.659 [2024-04-24 20:31:16.810574] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:46.659 [2024-04-24 20:31:16.810585] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:46.659 [2024-04-24 20:31:16.810595] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:46.659 [2024-04-24 20:31:16.810606] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:46.659 [2024-04-24 20:31:16.810616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:46.659 [2024-04-24 20:31:16.810626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:46.659 [2024-04-24 20:31:16.810635] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:46.659 [2024-04-24 20:31:16.810646] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:46.659 [2024-04-24 20:31:16.810656] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 
blk_sz:0x20 00:24:46.659 [2024-04-24 20:31:16.810666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:46.659 [2024-04-24 20:31:16.810676] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:46.659 [2024-04-24 20:31:16.810686] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:46.659 [2024-04-24 20:31:16.810696] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:46.659 [2024-04-24 20:31:16.810706] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:46.659 [2024-04-24 20:31:16.810717] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:46.659 [2024-04-24 20:31:16.810727] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:46.659 [2024-04-24 20:31:16.810737] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:46.659 [2024-04-24 20:31:16.810747] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:46.659 [2024-04-24 20:31:16.810757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.659 [2024-04-24 20:31:16.810767] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:46.659 [2024-04-24 20:31:16.810777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.734 ms 00:24:46.659 [2024-04-24 20:31:16.810786] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.659 [2024-04-24 20:31:16.835244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.659 [2024-04-24 20:31:16.835278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:46.659 [2024-04-24 20:31:16.835291] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.457 ms 00:24:46.659 [2024-04-24 20:31:16.835301] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.659 [2024-04-24 20:31:16.835377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.659 [2024-04-24 20:31:16.835392] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:46.659 [2024-04-24 20:31:16.835403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:46.659 [2024-04-24 20:31:16.835412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:16.898558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:16.898610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:46.942 [2024-04-24 20:31:16.898625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.196 ms 00:24:46.942 [2024-04-24 20:31:16.898640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:16.898700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:16.898711] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:46.942 [2024-04-24 20:31:16.898721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:46.942 [2024-04-24 20:31:16.898732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:16.899238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:16.899261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:46.942 [2024-04-24 20:31:16.899273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:24:46.942 [2024-04-24 20:31:16.899283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:16.899407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:16.899421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:46.942 [2024-04-24 20:31:16.899432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:24:46.942 [2024-04-24 20:31:16.899442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:16.922729] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:16.922774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:46.942 [2024-04-24 20:31:16.922789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.301 ms 00:24:46.942 [2024-04-24 20:31:16.922800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:16.942712] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:46.942 [2024-04-24 20:31:16.942771] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:46.942 [2024-04-24 20:31:16.942786] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:16.942798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:46.942 [2024-04-24 20:31:16.942811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.871 ms 00:24:46.942 [2024-04-24 20:31:16.942838] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:16.973903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:16.973958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:46.942 [2024-04-24 20:31:16.973973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.055 ms 00:24:46.942 [2024-04-24 20:31:16.973985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:16.993600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:16.993639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:46.942 [2024-04-24 20:31:16.993652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.588 ms 00:24:46.942 [2024-04-24 20:31:16.993674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:17.012519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:17.012562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 
00:24:46.942 [2024-04-24 20:31:17.012576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.835 ms 00:24:46.942 [2024-04-24 20:31:17.012586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:17.013094] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:17.013116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:46.942 [2024-04-24 20:31:17.013128] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:24:46.942 [2024-04-24 20:31:17.013138] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:17.110783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:17.110877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:46.942 [2024-04-24 20:31:17.110901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.778 ms 00:24:46.942 [2024-04-24 20:31:17.110916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:17.124640] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:46.942 [2024-04-24 20:31:17.128405] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:17.128451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:46.942 [2024-04-24 20:31:17.128472] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.399 ms 00:24:46.942 [2024-04-24 20:31:17.128486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:17.128620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:17.128646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:46.942 [2024-04-24 20:31:17.128662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:46.942 [2024-04-24 20:31:17.128675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:17.129675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:17.129726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:46.942 [2024-04-24 20:31:17.129745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:24:46.942 [2024-04-24 20:31:17.129760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:17.132497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:17.132547] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:46.942 [2024-04-24 20:31:17.132568] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.690 ms 00:24:46.942 [2024-04-24 20:31:17.132581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:17.132630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:17.132643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:46.942 [2024-04-24 20:31:17.132656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:46.942 [2024-04-24 20:31:17.132669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 
20:31:17.132732] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:46.942 [2024-04-24 20:31:17.132748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:17.132760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:46.942 [2024-04-24 20:31:17.132775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:46.942 [2024-04-24 20:31:17.132792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:17.167154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:17.167214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:46.942 [2024-04-24 20:31:17.167234] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.386 ms 00:24:46.942 [2024-04-24 20:31:17.167249] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:17.167343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:46.942 [2024-04-24 20:31:17.167370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:46.942 [2024-04-24 20:31:17.167386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:46.942 [2024-04-24 20:31:17.167400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:46.942 [2024-04-24 20:31:17.168855] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 394.763 ms, result 0 00:25:20.588  Copying: 33/1024 [MB] (33 MBps) Copying: 67/1024 [MB] (33 MBps) Copying: 99/1024 [MB] (32 MBps) Copying: 131/1024 [MB] (31 MBps) Copying: 165/1024 [MB] (34 MBps) Copying: 201/1024 [MB] (35 MBps) Copying: 235/1024 [MB] (33 MBps) Copying: 266/1024 [MB] (31 MBps) Copying: 299/1024 [MB] (33 MBps) Copying: 331/1024 [MB] (31 MBps) Copying: 364/1024 [MB] (33 MBps) Copying: 398/1024 [MB] (33 MBps) Copying: 433/1024 [MB] (34 MBps) Copying: 465/1024 [MB] (32 MBps) Copying: 500/1024 [MB] (34 MBps) Copying: 529/1024 [MB] (28 MBps) Copying: 557/1024 [MB] (28 MBps) Copying: 585/1024 [MB] (27 MBps) Copying: 615/1024 [MB] (29 MBps) Copying: 644/1024 [MB] (29 MBps) Copying: 675/1024 [MB] (30 MBps) Copying: 704/1024 [MB] (29 MBps) Copying: 733/1024 [MB] (29 MBps) Copying: 758/1024 [MB] (24 MBps) Copying: 786/1024 [MB] (28 MBps) Copying: 815/1024 [MB] (28 MBps) Copying: 842/1024 [MB] (27 MBps) Copying: 870/1024 [MB] (28 MBps) Copying: 899/1024 [MB] (28 MBps) Copying: 926/1024 [MB] (27 MBps) Copying: 956/1024 [MB] (29 MBps) Copying: 989/1024 [MB] (33 MBps) Copying: 1021/1024 [MB] (32 MBps) Copying: 1024/1024 [MB] (average 31 MBps)[2024-04-24 20:31:50.568326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.588 [2024-04-24 20:31:50.568438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:20.588 [2024-04-24 20:31:50.568477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:20.588 [2024-04-24 20:31:50.568494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.589 [2024-04-24 20:31:50.568535] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:20.589 [2024-04-24 20:31:50.575036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.589 [2024-04-24 20:31:50.575094] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:20.589 [2024-04-24 20:31:50.575116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.481 ms 00:25:20.589 [2024-04-24 20:31:50.575134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.589 [2024-04-24 20:31:50.575569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.589 [2024-04-24 20:31:50.575614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:20.589 [2024-04-24 20:31:50.575635] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.388 ms 00:25:20.589 [2024-04-24 20:31:50.575653] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.589 [2024-04-24 20:31:50.580725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.589 [2024-04-24 20:31:50.580762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:20.589 [2024-04-24 20:31:50.580783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.053 ms 00:25:20.589 [2024-04-24 20:31:50.580801] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.589 [2024-04-24 20:31:50.587473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.589 [2024-04-24 20:31:50.587517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:20.589 [2024-04-24 20:31:50.587662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.649 ms 00:25:20.589 [2024-04-24 20:31:50.587675] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.589 [2024-04-24 20:31:50.627309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.589 [2024-04-24 20:31:50.627360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:20.589 [2024-04-24 20:31:50.627377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.599 ms 00:25:20.589 [2024-04-24 20:31:50.627388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.589 [2024-04-24 20:31:50.649802] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.589 [2024-04-24 20:31:50.649851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:20.589 [2024-04-24 20:31:50.649876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.402 ms 00:25:20.589 [2024-04-24 20:31:50.649887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.589 [2024-04-24 20:31:50.653736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.589 [2024-04-24 20:31:50.653784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:20.589 [2024-04-24 20:31:50.653797] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.807 ms 00:25:20.589 [2024-04-24 20:31:50.653808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.589 [2024-04-24 20:31:50.693109] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.589 [2024-04-24 20:31:50.693160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:20.589 [2024-04-24 20:31:50.693176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.340 ms 00:25:20.589 [2024-04-24 20:31:50.693187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.589 [2024-04-24 20:31:50.731864] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.589 
[2024-04-24 20:31:50.731917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:20.589 [2024-04-24 20:31:50.731932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.687 ms 00:25:20.589 [2024-04-24 20:31:50.731944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.589 [2024-04-24 20:31:50.769926] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.589 [2024-04-24 20:31:50.769980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:20.589 [2024-04-24 20:31:50.769996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.997 ms 00:25:20.589 [2024-04-24 20:31:50.770007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.589 [2024-04-24 20:31:50.808188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.589 [2024-04-24 20:31:50.808243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:20.589 [2024-04-24 20:31:50.808259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.145 ms 00:25:20.589 [2024-04-24 20:31:50.808270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.589 [2024-04-24 20:31:50.808314] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:20.589 [2024-04-24 20:31:50.808362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:20.589 [2024-04-24 20:31:50.808380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3584 / 261120 wr_cnt: 1 state: open 00:25:20.589 [2024-04-24 20:31:50.808393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808538] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 
20:31:50.808817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.808989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 
00:25:20.589 [2024-04-24 20:31:50.809116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 
wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:20.589 [2024-04-24 20:31:50.809513] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:20.589 [2024-04-24 20:31:50.809525] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8826b6e1-a783-4624-b364-c3beb04252a6 00:25:20.589 [2024-04-24 20:31:50.809537] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264704 00:25:20.589 [2024-04-24 20:31:50.809548] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:20.589 [2024-04-24 20:31:50.809565] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:20.589 [2024-04-24 20:31:50.809590] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:20.589 [2024-04-24 20:31:50.809600] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:20.589 [2024-04-24 20:31:50.809611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:20.589 [2024-04-24 20:31:50.809621] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:20.589 [2024-04-24 20:31:50.809631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:20.589 [2024-04-24 20:31:50.809641] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:20.589 [2024-04-24 20:31:50.809652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.589 [2024-04-24 20:31:50.809663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:20.590 [2024-04-24 20:31:50.809674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:25:20.590 [2024-04-24 20:31:50.809685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.848 [2024-04-24 20:31:50.830788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.848 [2024-04-24 20:31:50.830839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:20.848 [2024-04-24 20:31:50.830867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.095 ms 00:25:20.848 [2024-04-24 20:31:50.830878] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.848 [2024-04-24 20:31:50.831182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.848 [2024-04-24 20:31:50.831197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:20.848 [2024-04-24 20:31:50.831208] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:25:20.848 [2024-04-24 20:31:50.831220] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.848 [2024-04-24 20:31:50.888687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.848 [2024-04-24 20:31:50.888737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:20.848 [2024-04-24 20:31:50.888751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.848 [2024-04-24 20:31:50.888763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.848 [2024-04-24 20:31:50.888851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.848 [2024-04-24 20:31:50.888877] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:20.848 [2024-04-24 20:31:50.888889] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.848 [2024-04-24 20:31:50.888900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.849 [2024-04-24 20:31:50.888983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.849 [2024-04-24 20:31:50.888998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:20.849 [2024-04-24 20:31:50.889009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.849 [2024-04-24 20:31:50.889021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.849 [2024-04-24 20:31:50.889039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.849 [2024-04-24 20:31:50.889050] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:20.849 [2024-04-24 20:31:50.889060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.849 [2024-04-24 20:31:50.889070] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.849 [2024-04-24 20:31:51.016806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.849 [2024-04-24 20:31:51.016897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:20.849 [2024-04-24 20:31:51.016915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.849 [2024-04-24 20:31:51.016927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.849 [2024-04-24 20:31:51.067344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.849 [2024-04-24 20:31:51.067421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:20.849 [2024-04-24 20:31:51.067437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.849 [2024-04-24 20:31:51.067449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.849 [2024-04-24 20:31:51.067542] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.849 [2024-04-24 20:31:51.067567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:20.849 [2024-04-24 20:31:51.067579] mngt/ftl_mngt.c: 409:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.849 [2024-04-24 20:31:51.067590] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.849 [2024-04-24 20:31:51.067633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.849 [2024-04-24 20:31:51.067646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:20.849 [2024-04-24 20:31:51.067656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.849 [2024-04-24 20:31:51.067667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.849 [2024-04-24 20:31:51.067790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.849 [2024-04-24 20:31:51.067804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:20.849 [2024-04-24 20:31:51.067820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.849 [2024-04-24 20:31:51.067831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.849 [2024-04-24 20:31:51.067892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.849 [2024-04-24 20:31:51.067907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:20.849 [2024-04-24 20:31:51.067918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.849 [2024-04-24 20:31:51.067928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.849 [2024-04-24 20:31:51.067975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.849 [2024-04-24 20:31:51.067987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:20.849 [2024-04-24 20:31:51.068003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.849 [2024-04-24 20:31:51.068013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.849 [2024-04-24 20:31:51.068070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:20.849 [2024-04-24 20:31:51.068087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:20.849 [2024-04-24 20:31:51.068098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:20.849 [2024-04-24 20:31:51.068110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.849 [2024-04-24 20:31:51.068311] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 500.771 ms, result 0 00:25:22.300 00:25:22.300 00:25:22.300 20:31:52 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:24.202 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:25:24.202 20:31:54 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:25:24.202 20:31:54 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:25:24.202 20:31:54 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:24.202 20:31:54 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:24.202 20:31:54 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:24.461 20:31:54 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:24.461 20:31:54 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:25:24.461 20:31:54 -- ftl/dirty_shutdown.sh@37 -- # 
killprocess 81096 00:25:24.461 20:31:54 -- common/autotest_common.sh@936 -- # '[' -z 81096 ']' 00:25:24.461 Process with pid 81096 is not found 00:25:24.461 20:31:54 -- common/autotest_common.sh@940 -- # kill -0 81096 00:25:24.461 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (81096) - No such process 00:25:24.461 20:31:54 -- common/autotest_common.sh@963 -- # echo 'Process with pid 81096 is not found' 00:25:24.461 20:31:54 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:25:24.719 Remove shared memory files 00:25:24.719 20:31:54 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:25:24.719 20:31:54 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:24.719 20:31:54 -- ftl/common.sh@205 -- # rm -f rm -f 00:25:24.719 20:31:54 -- ftl/common.sh@206 -- # rm -f rm -f 00:25:24.719 20:31:54 -- ftl/common.sh@207 -- # rm -f rm -f 00:25:24.719 20:31:54 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:24.719 20:31:54 -- ftl/common.sh@209 -- # rm -f rm -f 00:25:24.719 ************************************ 00:25:24.719 END TEST ftl_dirty_shutdown 00:25:24.719 ************************************ 00:25:24.719 00:25:24.719 real 3m18.223s 00:25:24.719 user 3m43.906s 00:25:24.719 sys 0m34.924s 00:25:24.719 20:31:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:24.719 20:31:54 -- common/autotest_common.sh@10 -- # set +x 00:25:24.719 20:31:54 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:24.719 20:31:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:25:24.719 20:31:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:24.719 20:31:54 -- common/autotest_common.sh@10 -- # set +x 00:25:24.978 ************************************ 00:25:24.978 START TEST ftl_upgrade_shutdown 00:25:24.978 ************************************ 00:25:24.978 20:31:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:25:24.978 * Looking for test storage... 00:25:24.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:24.978 20:31:55 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:25:24.978 20:31:55 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:24.978 20:31:55 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:24.978 20:31:55 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
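Editor's note: the killprocess trace above tolerates a pid that has already exited — the kill -0 probe at autotest_common.sh line 940 fails with "No such process", the helper just echoes a notice, and cleanup proceeds. A minimal sketch of that pattern as the trace implies it (the full helper, visible in the later killprocess 83379 trace, also inspects ps -o comm= to special-case sudo-owned processes; that detail is omitted here):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                      # no pid recorded, nothing to do
        if ! kill -0 "$pid" 2>/dev/null; then          # probe only; sends no signal
            echo "Process with pid $pid is not found"  # already gone: not a failure
            return 0
        fi
        kill "$pid" && wait "$pid"                     # terminate, then reap
    }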
00:25:24.978 20:31:55 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:24.978 20:31:55 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:24.978 20:31:55 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:24.978 20:31:55 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:24.978 20:31:55 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:24.978 20:31:55 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:24.978 20:31:55 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:24.978 20:31:55 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:24.978 20:31:55 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:24.978 20:31:55 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:24.978 20:31:55 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:24.978 20:31:55 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:24.978 20:31:55 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:24.978 20:31:55 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:24.978 20:31:55 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:24.978 20:31:55 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:24.978 20:31:55 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:24.978 20:31:55 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:24.978 20:31:55 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:24.978 20:31:55 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:24.978 20:31:55 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:24.978 20:31:55 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:24.978 20:31:55 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:24.978 20:31:55 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:25:24.978 20:31:55 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:25:24.978 20:31:55 -- ftl/common.sh@81 -- # local base_bdev= 00:25:24.978 20:31:55 -- ftl/common.sh@82 -- # local cache_bdev= 00:25:24.978 20:31:55 -- ftl/common.sh@84 -- # [[ -f 
/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:24.978 20:31:55 -- ftl/common.sh@89 -- # spdk_tgt_pid=83228 00:25:24.978 20:31:55 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:25:24.978 20:31:55 -- ftl/common.sh@91 -- # waitforlisten 83228 00:25:24.978 20:31:55 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:25:24.978 20:31:55 -- common/autotest_common.sh@817 -- # '[' -z 83228 ']' 00:25:24.978 20:31:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.978 20:31:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:24.978 20:31:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.978 20:31:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:24.978 20:31:55 -- common/autotest_common.sh@10 -- # set +x 00:25:25.236 [2024-04-24 20:31:55.309210] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:25:25.236 [2024-04-24 20:31:55.309352] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83228 ] 00:25:25.495 [2024-04-24 20:31:55.485459] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.753 [2024-04-24 20:31:55.786476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.688 20:31:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:26.688 20:31:56 -- common/autotest_common.sh@850 -- # return 0 00:25:26.688 20:31:56 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:25:26.688 20:31:56 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:25:26.688 20:31:56 -- ftl/common.sh@99 -- # local params 00:25:26.688 20:31:56 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:26.688 20:31:56 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:25:26.688 20:31:56 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:26.688 20:31:56 -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:25:26.688 20:31:56 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:26.688 20:31:56 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:25:26.688 20:31:56 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:26.688 20:31:56 -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:25:26.688 20:31:56 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:26.688 20:31:56 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:25:26.688 20:31:56 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:25:26.688 20:31:56 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:25:26.688 20:31:56 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:25:26.688 20:31:56 -- ftl/common.sh@54 -- # local name=base 00:25:26.688 20:31:56 -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:26.688 20:31:56 -- ftl/common.sh@56 -- # local size=20480 00:25:26.688 20:31:56 -- ftl/common.sh@59 -- # local base_bdev 00:25:26.688 20:31:56 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:25:26.946 20:31:57 -- ftl/common.sh@60 -- # base_bdev=basen1 00:25:26.947 20:31:57 -- ftl/common.sh@62 -- # local 
base_size 00:25:27.205 20:31:57 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:25:27.205 20:31:57 -- common/autotest_common.sh@1364 -- # local bdev_name=basen1 00:25:27.205 20:31:57 -- common/autotest_common.sh@1365 -- # local bdev_info 00:25:27.205 20:31:57 -- common/autotest_common.sh@1366 -- # local bs 00:25:27.205 20:31:57 -- common/autotest_common.sh@1367 -- # local nb 00:25:27.205 20:31:57 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:25:27.205 20:31:57 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:25:27.205 { 00:25:27.205 "name": "basen1", 00:25:27.205 "aliases": [ 00:25:27.205 "0427c0be-312f-489a-85ad-33df5571896c" 00:25:27.205 ], 00:25:27.205 "product_name": "NVMe disk", 00:25:27.205 "block_size": 4096, 00:25:27.205 "num_blocks": 1310720, 00:25:27.205 "uuid": "0427c0be-312f-489a-85ad-33df5571896c", 00:25:27.205 "assigned_rate_limits": { 00:25:27.205 "rw_ios_per_sec": 0, 00:25:27.205 "rw_mbytes_per_sec": 0, 00:25:27.205 "r_mbytes_per_sec": 0, 00:25:27.205 "w_mbytes_per_sec": 0 00:25:27.205 }, 00:25:27.205 "claimed": true, 00:25:27.205 "claim_type": "read_many_write_one", 00:25:27.205 "zoned": false, 00:25:27.205 "supported_io_types": { 00:25:27.205 "read": true, 00:25:27.205 "write": true, 00:25:27.205 "unmap": true, 00:25:27.205 "write_zeroes": true, 00:25:27.205 "flush": true, 00:25:27.205 "reset": true, 00:25:27.205 "compare": true, 00:25:27.205 "compare_and_write": false, 00:25:27.205 "abort": true, 00:25:27.205 "nvme_admin": true, 00:25:27.205 "nvme_io": true 00:25:27.205 }, 00:25:27.205 "driver_specific": { 00:25:27.205 "nvme": [ 00:25:27.205 { 00:25:27.205 "pci_address": "0000:00:11.0", 00:25:27.205 "trid": { 00:25:27.205 "trtype": "PCIe", 00:25:27.205 "traddr": "0000:00:11.0" 00:25:27.205 }, 00:25:27.205 "ctrlr_data": { 00:25:27.205 "cntlid": 0, 00:25:27.205 "vendor_id": "0x1b36", 00:25:27.205 "model_number": "QEMU NVMe Ctrl", 00:25:27.205 "serial_number": "12341", 00:25:27.205 "firmware_revision": "8.0.0", 00:25:27.205 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:27.205 "oacs": { 00:25:27.205 "security": 0, 00:25:27.205 "format": 1, 00:25:27.205 "firmware": 0, 00:25:27.205 "ns_manage": 1 00:25:27.205 }, 00:25:27.205 "multi_ctrlr": false, 00:25:27.205 "ana_reporting": false 00:25:27.205 }, 00:25:27.205 "vs": { 00:25:27.205 "nvme_version": "1.4" 00:25:27.205 }, 00:25:27.205 "ns_data": { 00:25:27.205 "id": 1, 00:25:27.205 "can_share": false 00:25:27.205 } 00:25:27.205 } 00:25:27.205 ], 00:25:27.205 "mp_policy": "active_passive" 00:25:27.205 } 00:25:27.205 } 00:25:27.205 ]' 00:25:27.205 20:31:57 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:25:27.205 20:31:57 -- common/autotest_common.sh@1369 -- # bs=4096 00:25:27.205 20:31:57 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:25:27.464 20:31:57 -- common/autotest_common.sh@1370 -- # nb=1310720 00:25:27.464 20:31:57 -- common/autotest_common.sh@1373 -- # bdev_size=5120 00:25:27.464 20:31:57 -- common/autotest_common.sh@1374 -- # echo 5120 00:25:27.464 20:31:57 -- ftl/common.sh@63 -- # base_size=5120 00:25:27.464 20:31:57 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:25:27.464 20:31:57 -- ftl/common.sh@67 -- # clear_lvols 00:25:27.464 20:31:57 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:27.464 20:31:57 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:27.464 20:31:57 -- ftl/common.sh@28 -- # stores=4e8f5433-eebc-4d65-8c26-9f4c2b10b8f7 00:25:27.464 20:31:57 -- 
ftl/common.sh@29 -- # for lvs in $stores 00:25:27.464 20:31:57 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4e8f5433-eebc-4d65-8c26-9f4c2b10b8f7 00:25:27.722 20:31:57 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:25:27.982 20:31:58 -- ftl/common.sh@68 -- # lvs=fa7b0167-5bbc-44f0-97e8-22f465067706 00:25:27.982 20:31:58 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u fa7b0167-5bbc-44f0-97e8-22f465067706 00:25:28.240 20:31:58 -- ftl/common.sh@107 -- # base_bdev=4da60099-f39b-4970-a788-2d67fa5aeb68 00:25:28.240 20:31:58 -- ftl/common.sh@108 -- # [[ -z 4da60099-f39b-4970-a788-2d67fa5aeb68 ]] 00:25:28.240 20:31:58 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 4da60099-f39b-4970-a788-2d67fa5aeb68 5120 00:25:28.240 20:31:58 -- ftl/common.sh@35 -- # local name=cache 00:25:28.240 20:31:58 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:28.240 20:31:58 -- ftl/common.sh@37 -- # local base_bdev=4da60099-f39b-4970-a788-2d67fa5aeb68 00:25:28.240 20:31:58 -- ftl/common.sh@38 -- # local cache_size=5120 00:25:28.240 20:31:58 -- ftl/common.sh@41 -- # get_bdev_size 4da60099-f39b-4970-a788-2d67fa5aeb68 00:25:28.240 20:31:58 -- common/autotest_common.sh@1364 -- # local bdev_name=4da60099-f39b-4970-a788-2d67fa5aeb68 00:25:28.240 20:31:58 -- common/autotest_common.sh@1365 -- # local bdev_info 00:25:28.240 20:31:58 -- common/autotest_common.sh@1366 -- # local bs 00:25:28.240 20:31:58 -- common/autotest_common.sh@1367 -- # local nb 00:25:28.240 20:31:58 -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4da60099-f39b-4970-a788-2d67fa5aeb68 00:25:28.498 20:31:58 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:25:28.498 { 00:25:28.498 "name": "4da60099-f39b-4970-a788-2d67fa5aeb68", 00:25:28.498 "aliases": [ 00:25:28.498 "lvs/basen1p0" 00:25:28.498 ], 00:25:28.498 "product_name": "Logical Volume", 00:25:28.498 "block_size": 4096, 00:25:28.498 "num_blocks": 5242880, 00:25:28.498 "uuid": "4da60099-f39b-4970-a788-2d67fa5aeb68", 00:25:28.498 "assigned_rate_limits": { 00:25:28.498 "rw_ios_per_sec": 0, 00:25:28.498 "rw_mbytes_per_sec": 0, 00:25:28.498 "r_mbytes_per_sec": 0, 00:25:28.498 "w_mbytes_per_sec": 0 00:25:28.498 }, 00:25:28.498 "claimed": false, 00:25:28.498 "zoned": false, 00:25:28.498 "supported_io_types": { 00:25:28.498 "read": true, 00:25:28.498 "write": true, 00:25:28.498 "unmap": true, 00:25:28.498 "write_zeroes": true, 00:25:28.498 "flush": false, 00:25:28.498 "reset": true, 00:25:28.498 "compare": false, 00:25:28.498 "compare_and_write": false, 00:25:28.498 "abort": false, 00:25:28.498 "nvme_admin": false, 00:25:28.498 "nvme_io": false 00:25:28.498 }, 00:25:28.498 "driver_specific": { 00:25:28.498 "lvol": { 00:25:28.498 "lvol_store_uuid": "fa7b0167-5bbc-44f0-97e8-22f465067706", 00:25:28.498 "base_bdev": "basen1", 00:25:28.498 "thin_provision": true, 00:25:28.498 "snapshot": false, 00:25:28.498 "clone": false, 00:25:28.498 "esnap_clone": false 00:25:28.498 } 00:25:28.498 } 00:25:28.498 } 00:25:28.498 ]' 00:25:28.499 20:31:58 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:25:28.499 20:31:58 -- common/autotest_common.sh@1369 -- # bs=4096 00:25:28.499 20:31:58 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:25:28.499 20:31:58 -- common/autotest_common.sh@1370 -- # nb=5242880 00:25:28.499 20:31:58 -- 
common/autotest_common.sh@1373 -- # bdev_size=20480 00:25:28.499 20:31:58 -- common/autotest_common.sh@1374 -- # echo 20480 00:25:28.499 20:31:58 -- ftl/common.sh@41 -- # local base_size=1024 00:25:28.499 20:31:58 -- ftl/common.sh@44 -- # local nvc_bdev 00:25:28.499 20:31:58 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:25:28.756 20:31:58 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:25:28.756 20:31:58 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:25:28.756 20:31:58 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:25:29.014 20:31:59 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:25:29.014 20:31:59 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:25:29.014 20:31:59 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 4da60099-f39b-4970-a788-2d67fa5aeb68 -c cachen1p0 --l2p_dram_limit 2 00:25:29.274 [2024-04-24 20:31:59.254199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.274 [2024-04-24 20:31:59.254254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:25:29.274 [2024-04-24 20:31:59.254276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:25:29.274 [2024-04-24 20:31:59.254288] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.274 [2024-04-24 20:31:59.254351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.274 [2024-04-24 20:31:59.254363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:29.274 [2024-04-24 20:31:59.254380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:25:29.274 [2024-04-24 20:31:59.254391] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.274 [2024-04-24 20:31:59.254421] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:25:29.274 [2024-04-24 20:31:59.255696] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:25:29.274 [2024-04-24 20:31:59.255731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.274 [2024-04-24 20:31:59.255744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:29.274 [2024-04-24 20:31:59.255771] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.323 ms 00:25:29.274 [2024-04-24 20:31:59.255785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.274 [2024-04-24 20:31:59.255885] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d6a7871b-4402-48fd-ac10-0a902c381bfb 00:25:29.274 [2024-04-24 20:31:59.257432] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.274 [2024-04-24 20:31:59.257470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:25:29.274 [2024-04-24 20:31:59.257483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:25:29.274 [2024-04-24 20:31:59.257497] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.274 [2024-04-24 20:31:59.265255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.274 [2024-04-24 20:31:59.265290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:29.274 [2024-04-24 20:31:59.265304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl] duration: 7.711 ms 00:25:29.275 [2024-04-24 20:31:59.265318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.275 [2024-04-24 20:31:59.265369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.275 [2024-04-24 20:31:59.265386] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:29.275 [2024-04-24 20:31:59.265398] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:25:29.275 [2024-04-24 20:31:59.265411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.275 [2024-04-24 20:31:59.265481] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.275 [2024-04-24 20:31:59.265499] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:25:29.275 [2024-04-24 20:31:59.265511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:25:29.275 [2024-04-24 20:31:59.265523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.275 [2024-04-24 20:31:59.265553] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:25:29.275 [2024-04-24 20:31:59.271917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.275 [2024-04-24 20:31:59.271968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:29.275 [2024-04-24 20:31:59.271985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.381 ms 00:25:29.275 [2024-04-24 20:31:59.271996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.275 [2024-04-24 20:31:59.272032] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.275 [2024-04-24 20:31:59.272043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:25:29.275 [2024-04-24 20:31:59.272057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:25:29.275 [2024-04-24 20:31:59.272067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.275 [2024-04-24 20:31:59.272122] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:25:29.275 [2024-04-24 20:31:59.272257] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:25:29.275 [2024-04-24 20:31:59.272278] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:25:29.275 [2024-04-24 20:31:59.272292] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:25:29.275 [2024-04-24 20:31:59.272312] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:25:29.275 [2024-04-24 20:31:59.272327] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:25:29.275 [2024-04-24 20:31:59.272341] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:25:29.275 [2024-04-24 20:31:59.272352] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:25:29.275 [2024-04-24 20:31:59.272365] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:25:29.275 [2024-04-24 20:31:59.272375] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:25:29.275 [2024-04-24 20:31:59.272391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.275 [2024-04-24 
20:31:59.272402] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:25:29.275 [2024-04-24 20:31:59.272415] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:25:29.275 [2024-04-24 20:31:59.272426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.275 [2024-04-24 20:31:59.272505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.275 [2024-04-24 20:31:59.272517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:25:29.275 [2024-04-24 20:31:59.272560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:25:29.275 [2024-04-24 20:31:59.272571] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.275 [2024-04-24 20:31:59.272643] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:25:29.275 [2024-04-24 20:31:59.272656] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:25:29.275 [2024-04-24 20:31:59.272672] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:29.275 [2024-04-24 20:31:59.272683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:29.275 [2024-04-24 20:31:59.272697] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:25:29.275 [2024-04-24 20:31:59.272706] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:25:29.275 [2024-04-24 20:31:59.272719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:25:29.275 [2024-04-24 20:31:59.272729] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:25:29.275 [2024-04-24 20:31:59.272754] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:25:29.275 [2024-04-24 20:31:59.272763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:29.275 [2024-04-24 20:31:59.272774] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:25:29.275 [2024-04-24 20:31:59.272785] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:25:29.275 [2024-04-24 20:31:59.272796] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:29.275 [2024-04-24 20:31:59.272806] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:25:29.275 [2024-04-24 20:31:59.272820] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:25:29.275 [2024-04-24 20:31:59.272829] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:29.275 [2024-04-24 20:31:59.272841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:25:29.275 [2024-04-24 20:31:59.272850] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:25:29.275 [2024-04-24 20:31:59.272864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:29.275 [2024-04-24 20:31:59.272888] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:25:29.275 [2024-04-24 20:31:59.272899] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:25:29.275 [2024-04-24 20:31:59.272909] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:25:29.275 [2024-04-24 20:31:59.272920] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:25:29.275 [2024-04-24 20:31:59.272929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:25:29.275 [2024-04-24 20:31:59.272941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:29.275 [2024-04-24 
20:31:59.272950] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:25:29.275 [2024-04-24 20:31:59.272961] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:25:29.275 [2024-04-24 20:31:59.272970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:29.275 [2024-04-24 20:31:59.272982] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:25:29.275 [2024-04-24 20:31:59.272992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:25:29.275 [2024-04-24 20:31:59.273003] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:29.275 [2024-04-24 20:31:59.273012] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:25:29.275 [2024-04-24 20:31:59.273024] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:25:29.275 [2024-04-24 20:31:59.273033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:25:29.275 [2024-04-24 20:31:59.273046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:25:29.275 [2024-04-24 20:31:59.273055] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:25:29.275 [2024-04-24 20:31:59.273084] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:29.275 [2024-04-24 20:31:59.273093] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:25:29.275 [2024-04-24 20:31:59.273105] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:25:29.275 [2024-04-24 20:31:59.273115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:29.275 [2024-04-24 20:31:59.273128] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:25:29.275 [2024-04-24 20:31:59.273138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:25:29.275 [2024-04-24 20:31:59.273151] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:25:29.275 [2024-04-24 20:31:59.273165] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:25:29.275 [2024-04-24 20:31:59.273178] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:25:29.275 [2024-04-24 20:31:59.273189] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:25:29.275 [2024-04-24 20:31:59.273201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:25:29.275 [2024-04-24 20:31:59.273211] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:25:29.275 [2024-04-24 20:31:59.273223] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:25:29.275 [2024-04-24 20:31:59.273233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:25:29.275 [2024-04-24 20:31:59.273250] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:25:29.275 [2024-04-24 20:31:59.273263] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:29.275 [2024-04-24 20:31:59.273278] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:25:29.275 [2024-04-24 20:31:59.273289] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:25:29.275 [2024-04-24 20:31:59.273302] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 
blk_offs:0xec0 blk_sz:0x20 00:25:29.275 [2024-04-24 20:31:59.273314] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:25:29.275 [2024-04-24 20:31:59.273327] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:25:29.275 [2024-04-24 20:31:59.273338] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:25:29.275 [2024-04-24 20:31:59.273351] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:25:29.275 [2024-04-24 20:31:59.273362] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:25:29.275 [2024-04-24 20:31:59.273375] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:25:29.275 [2024-04-24 20:31:59.273386] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:25:29.275 [2024-04-24 20:31:59.273399] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:25:29.275 [2024-04-24 20:31:59.273410] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:25:29.275 [2024-04-24 20:31:59.273423] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:25:29.276 [2024-04-24 20:31:59.273434] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:25:29.276 [2024-04-24 20:31:59.273451] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:29.276 [2024-04-24 20:31:59.273463] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:29.276 [2024-04-24 20:31:59.273477] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:25:29.276 [2024-04-24 20:31:59.273487] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:25:29.276 [2024-04-24 20:31:59.273500] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:25:29.276 [2024-04-24 20:31:59.273512] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.276 [2024-04-24 20:31:59.273525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:25:29.276 [2024-04-24 20:31:59.273535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.910 ms 00:25:29.276 [2024-04-24 20:31:59.273549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.276 [2024-04-24 20:31:59.299687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.276 [2024-04-24 20:31:59.299736] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:29.276 [2024-04-24 20:31:59.299752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl] duration: 26.122 ms 00:25:29.276 [2024-04-24 20:31:59.299765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.276 [2024-04-24 20:31:59.299824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.276 [2024-04-24 20:31:59.299841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:25:29.276 [2024-04-24 20:31:59.299853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:25:29.276 [2024-04-24 20:31:59.299884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.276 [2024-04-24 20:31:59.354968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.276 [2024-04-24 20:31:59.355038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:29.276 [2024-04-24 20:31:59.355054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 55.102 ms 00:25:29.276 [2024-04-24 20:31:59.355068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.276 [2024-04-24 20:31:59.355121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.276 [2024-04-24 20:31:59.355135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:29.276 [2024-04-24 20:31:59.355146] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:29.276 [2024-04-24 20:31:59.355163] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.276 [2024-04-24 20:31:59.355666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.276 [2024-04-24 20:31:59.355683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:29.276 [2024-04-24 20:31:59.355694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.436 ms 00:25:29.276 [2024-04-24 20:31:59.355707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.276 [2024-04-24 20:31:59.355757] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.276 [2024-04-24 20:31:59.355773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:29.276 [2024-04-24 20:31:59.355784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:25:29.276 [2024-04-24 20:31:59.355799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.276 [2024-04-24 20:31:59.380533] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.276 [2024-04-24 20:31:59.380589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:29.276 [2024-04-24 20:31:59.380606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 24.746 ms 00:25:29.276 [2024-04-24 20:31:59.380622] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.276 [2024-04-24 20:31:59.395634] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:25:29.276 [2024-04-24 20:31:59.396801] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.276 [2024-04-24 20:31:59.396829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:25:29.276 [2024-04-24 20:31:59.396847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.074 ms 00:25:29.276 [2024-04-24 20:31:59.396878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.276 [2024-04-24 20:31:59.446817] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:29.276 [2024-04-24 20:31:59.446889] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:25:29.276 [2024-04-24 20:31:59.446910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 49.965 ms 00:25:29.276 [2024-04-24 20:31:59.446921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:29.276 [2024-04-24 20:31:59.447003] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 00:25:29.276 [2024-04-24 20:31:59.447019] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:25:34.557 [2024-04-24 20:32:04.401701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.557 [2024-04-24 20:32:04.401827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:25:34.557 [2024-04-24 20:32:04.401872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4962.723 ms 00:25:34.557 [2024-04-24 20:32:04.401887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.557 [2024-04-24 20:32:04.402067] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.557 [2024-04-24 20:32:04.402090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:25:34.557 [2024-04-24 20:32:04.402111] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:25:34.557 [2024-04-24 20:32:04.402126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.557 [2024-04-24 20:32:04.449857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.557 [2024-04-24 20:32:04.449968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:25:34.557 [2024-04-24 20:32:04.449996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 47.668 ms 00:25:34.557 [2024-04-24 20:32:04.450011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.557 [2024-04-24 20:32:04.495297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.557 [2024-04-24 20:32:04.495390] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:25:34.557 [2024-04-24 20:32:04.495416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 45.260 ms 00:25:34.557 [2024-04-24 20:32:04.495429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.557 [2024-04-24 20:32:04.495992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.557 [2024-04-24 20:32:04.496025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:25:34.557 [2024-04-24 20:32:04.496057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.482 ms 00:25:34.557 [2024-04-24 20:32:04.496081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.557 [2024-04-24 20:32:04.625737] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.557 [2024-04-24 20:32:04.625830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:25:34.557 [2024-04-24 20:32:04.625870] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 129.731 ms 00:25:34.557 [2024-04-24 20:32:04.625886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.557 [2024-04-24 20:32:04.673498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.557 [2024-04-24 20:32:04.673587] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 
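Editor's note: the first-startup scrub above zeroes the entire 4 GiB NV cache data region in 4962.723 ms, i.e. roughly 4096 MiB / 4.963 s ≈ 825 MiB/s against the cache device. An illustrative way to pull that figure out of a captured console log (build.log is a stand-in for wherever this output was saved; the awk pairs the step-name notice with the duration notice that follows it):

    awk '/name: Scrub NV cache/ { pending = 1 }
         pending && /duration:/ {
             ms = $(NF - 1)        # field before trailing "ms", e.g. 4962.723
             printf "scrub took %.3f s -> %.0f MiB/s\n", ms / 1000, 4096 / (ms / 1000)
             pending = 0
         }' build.log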
00:25:34.557 [2024-04-24 20:32:04.673615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 47.577 ms 00:25:34.557 [2024-04-24 20:32:04.673630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.557 [2024-04-24 20:32:04.676554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.557 [2024-04-24 20:32:04.676593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:25:34.557 [2024-04-24 20:32:04.676613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.821 ms 00:25:34.557 [2024-04-24 20:32:04.676640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.557 [2024-04-24 20:32:04.721196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.557 [2024-04-24 20:32:04.721279] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:25:34.557 [2024-04-24 20:32:04.721305] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 44.495 ms 00:25:34.557 [2024-04-24 20:32:04.721318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.557 [2024-04-24 20:32:04.721422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.557 [2024-04-24 20:32:04.721437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:25:34.557 [2024-04-24 20:32:04.721457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:25:34.557 [2024-04-24 20:32:04.721475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.557 [2024-04-24 20:32:04.721649] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:34.557 [2024-04-24 20:32:04.721665] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:25:34.557 [2024-04-24 20:32:04.721682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:25:34.557 [2024-04-24 20:32:04.721695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:34.557 [2024-04-24 20:32:04.723363] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5477.408 ms, result 0 00:25:34.557 { 00:25:34.557 "name": "ftl", 00:25:34.557 "uuid": "d6a7871b-4402-48fd-ac10-0a902c381bfb" 00:25:34.557 } 00:25:34.557 20:32:04 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:25:34.815 [2024-04-24 20:32:04.929625] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.815 20:32:04 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:25:35.073 20:32:05 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:25:35.331 [2024-04-24 20:32:05.309398] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:25:35.331 20:32:05 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:25:35.331 [2024-04-24 20:32:05.492986] nvmf_rpc.c: 606:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:35.331 [2024-04-24 20:32:05.493479] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:35.331 
20:32:05 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:35.897 Fill FTL, iteration 1 00:25:35.897 20:32:05 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:25:35.897 20:32:05 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:25:35.897 20:32:05 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:25:35.897 20:32:05 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:25:35.897 20:32:05 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:25:35.897 20:32:05 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:25:35.897 20:32:05 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:25:35.897 20:32:05 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:25:35.897 20:32:05 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:25:35.897 20:32:05 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:35.897 20:32:05 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:25:35.897 20:32:05 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:35.897 20:32:05 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:35.897 20:32:05 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:35.897 20:32:05 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:35.897 20:32:05 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:25:35.897 20:32:05 -- ftl/common.sh@163 -- # spdk_ini_pid=83379 00:25:35.897 20:32:05 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:25:35.897 20:32:05 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:25:35.897 20:32:05 -- ftl/common.sh@165 -- # waitforlisten 83379 /var/tmp/spdk.tgt.sock 00:25:35.897 20:32:05 -- common/autotest_common.sh@817 -- # '[' -z 83379 ']' 00:25:35.897 20:32:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:25:35.897 20:32:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:35.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:25:35.898 20:32:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:25:35.898 20:32:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:35.898 20:32:05 -- common/autotest_common.sh@10 -- # set +x 00:25:35.898 [2024-04-24 20:32:05.943996] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
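Editor's note: each fill pass streams bs x count = 1 MiB x 1024 = 1 GiB (the size=1073741824 set above) of /dev/urandom into the ftln1 namespace at queue depth 2, and --seek advances by count blocks between the two iterations so the second pass lands in the next 1 GiB of the device. The loop shape, reconstructed from the traced parameters (tcp_dd wraps the spdk_dd invocation seen in the trace; variable names follow the script):

    bs=1048576 count=1024 qd=2 seek=0 iterations=2
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 \
               --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))    # next pass starts 1 GiB further into ftln1
    done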
00:25:35.898 [2024-04-24 20:32:05.944294] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83379 ] 00:25:35.898 [2024-04-24 20:32:06.114974] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.156 [2024-04-24 20:32:06.361693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.138 20:32:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:37.138 20:32:07 -- common/autotest_common.sh@850 -- # return 0 00:25:37.138 20:32:07 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:25:37.405 ftln1 00:25:37.405 20:32:07 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:25:37.405 20:32:07 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:25:37.663 20:32:07 -- ftl/common.sh@173 -- # echo ']}' 00:25:37.663 20:32:07 -- ftl/common.sh@176 -- # killprocess 83379 00:25:37.663 20:32:07 -- common/autotest_common.sh@936 -- # '[' -z 83379 ']' 00:25:37.663 20:32:07 -- common/autotest_common.sh@940 -- # kill -0 83379 00:25:37.663 20:32:07 -- common/autotest_common.sh@941 -- # uname 00:25:37.663 20:32:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:37.663 20:32:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83379 00:25:37.663 killing process with pid 83379 00:25:37.663 20:32:07 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:37.663 20:32:07 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:37.663 20:32:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83379' 00:25:37.663 20:32:07 -- common/autotest_common.sh@955 -- # kill 83379 00:25:37.663 20:32:07 -- common/autotest_common.sh@960 -- # wait 83379 00:25:40.195 20:32:10 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:25:40.195 20:32:10 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:25:40.196 [2024-04-24 20:32:10.314491] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
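Editor's note: tcp_dd works by spinning up a second, short-lived SPDK target on core 1 with its own RPC socket, attaching the NVMe/TCP subsystem as local bdev ftln1, snapshotting just the bdev subsystem into ini.json, and killing the helper target again; every subsequent spdk_dd run replays that JSON instead of talking to a live RPC server. Condensed from the trace above (waitforlisten and killprocess are the common.sh helpers; the rpc.py path is abbreviated):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock &
    spdk_ini_pid=$!
    waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock

    rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
           -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
           -n nqn.2018-09.io.spdk:cnode0              # prints: ftln1

    {
        echo '{"subsystems": ['
        rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
        echo ']}'
    } > "$testdir/config/ini.json"

    killprocess "$spdk_ini_pid"   # spdk_dd loads ini.json via --json from here on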
00:25:40.196 [2024-04-24 20:32:10.314609] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83436 ] 00:25:40.454 [2024-04-24 20:32:10.485606] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.712 [2024-04-24 20:32:10.732209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:46.339  Copying: 266/1024 [MB] (266 MBps) Copying: 528/1024 [MB] (262 MBps) Copying: 787/1024 [MB] (259 MBps) Copying: 1024/1024 [MB] (average 261 MBps) 00:25:46.339 00:25:46.339 Calculate MD5 checksum, iteration 1 00:25:46.339 20:32:16 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:25:46.339 20:32:16 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:25:46.339 20:32:16 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:46.339 20:32:16 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:46.339 20:32:16 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:46.339 20:32:16 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:46.339 20:32:16 -- ftl/common.sh@154 -- # return 0 00:25:46.339 20:32:16 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:25:46.339 [2024-04-24 20:32:16.515767] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
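tcp_dd itself (the ftl/common.sh@198-199 pair above) is a thin wrapper: ensure the initiator config exists, then run spdk_dd pinned to core 1 against that config, forwarding all dd-style arguments; the 'Copying: ... MBps' lines are spdk_dd's own progress output. Note that --seek and --skip count bs-sized blocks, so seek=1024 with bs=1048576 is the 1 GiB mark. As a sketch:

  tcp_dd() {
      tcp_initiator_setup
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
          --rpc-socket=/var/tmp/spdk.tgt.sock \
          --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
          "$@"   # e.g. --ib=ftln1 --of=... --bs=1048576 --count=1024 --qd=2 --skip=0
  }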
00:25:46.339 [2024-04-24 20:32:16.515896] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83504 ] 00:25:46.598 [2024-04-24 20:32:16.682889] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.864 [2024-04-24 20:32:16.915205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.565  Copying: 561/1024 [MB] (561 MBps) Copying: 1024/1024 [MB] (average 557 MBps) 00:25:50.565 00:25:50.565 20:32:20 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:25:50.566 20:32:20 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:52.509 20:32:22 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:25:52.509 Fill FTL, iteration 2 00:25:52.509 20:32:22 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=d262c4861a96b007ac72d67b9cb9f55a 00:25:52.509 20:32:22 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:25:52.509 20:32:22 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:25:52.509 20:32:22 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:25:52.509 20:32:22 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:52.509 20:32:22 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:52.509 20:32:22 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:52.509 20:32:22 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:52.509 20:32:22 -- ftl/common.sh@154 -- # return 0 00:25:52.509 20:32:22 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:25:52.509 [2024-04-24 20:32:22.311907] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:25:52.509 [2024-04-24 20:32:22.312082] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83568 ] 00:25:52.509 [2024-04-24 20:32:22.487880] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.768 [2024-04-24 20:32:22.781784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:58.843  Copying: 232/1024 [MB] (232 MBps) Copying: 466/1024 [MB] (234 MBps) Copying: 709/1024 [MB] (243 MBps) Copying: 954/1024 [MB] (245 MBps) Copying: 1024/1024 [MB] (average 238 MBps) 00:25:58.843 00:25:59.100 20:32:29 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:25:59.100 20:32:29 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:25:59.100 Calculate MD5 checksum, iteration 2 00:25:59.100 20:32:29 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:59.100 20:32:29 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:59.100 20:32:29 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:59.100 20:32:29 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:59.100 20:32:29 -- ftl/common.sh@154 -- # return 0 00:25:59.100 20:32:29 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:59.100 [2024-04-24 20:32:29.169394] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
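Iteration 1 recorded sums[0]=d262c4861a96b007ac72d67b9cb9f55a, and the loop now repeats the fill at the 1 GiB offset (--seek=1024). The digests only pay off after the prep_upgrade_on_shutdown restart later in this log, when the same stripes can be re-read and compared. That verification pass falls outside this excerpt; a hypothetical sketch of its obvious shape, assuming the same helpers and variables as above:

  # Hypothetical post-restart check: re-read each 1 GiB stripe, compare digests.
  for (( i = 0; i < iterations; i++ )); do
      tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$(( i * count ))
      actual=$(md5sum $testfile | cut -f1 -d' ')
      [[ $actual == "${sums[i]}" ]] || { echo "iteration $i: digest mismatch"; exit 1; }
  done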
00:25:59.100 [2024-04-24 20:32:29.169669] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83638 ] 00:25:59.359 [2024-04-24 20:32:29.342360] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.617 [2024-04-24 20:32:29.626526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.983  Copying: 619/1024 [MB] (619 MBps) Copying: 1024/1024 [MB] (average 615 MBps) 00:26:03.983 00:26:03.983 20:32:34 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:26:03.983 20:32:34 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:05.888 20:32:35 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:26:05.888 20:32:35 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c7f123b2f913ad811dabd6a0b92762e6 00:26:05.888 20:32:35 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:26:05.888 20:32:35 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:26:05.888 20:32:35 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:05.888 [2024-04-24 20:32:35.977700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.888 [2024-04-24 20:32:35.977781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:05.888 [2024-04-24 20:32:35.977803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:05.888 [2024-04-24 20:32:35.977816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.888 [2024-04-24 20:32:35.977871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.888 [2024-04-24 20:32:35.977890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:05.888 [2024-04-24 20:32:35.977921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:05.888 [2024-04-24 20:32:35.977946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.888 [2024-04-24 20:32:35.977975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:05.888 [2024-04-24 20:32:35.977988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:05.888 [2024-04-24 20:32:35.978001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:05.888 [2024-04-24 20:32:35.978013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:05.888 [2024-04-24 20:32:35.978098] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.388 ms, result 0 00:26:05.888 true 00:26:05.888 20:32:35 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:06.147 { 00:26:06.147 "name": "ftl", 00:26:06.147 "properties": [ 00:26:06.147 { 00:26:06.147 "name": "superblock_version", 00:26:06.147 "value": 5, 00:26:06.147 "read-only": true 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "name": "base_device", 00:26:06.147 "bands": [ 00:26:06.147 { 00:26:06.147 "id": 0, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 1, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 2, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 3, 
00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 4, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 5, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 6, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 7, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 8, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 9, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 10, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 11, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 12, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 13, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 14, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 15, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 16, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 17, 00:26:06.147 "state": "FREE", 00:26:06.147 "validity": 0.0 00:26:06.147 } 00:26:06.147 ], 00:26:06.147 "read-only": true 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "name": "cache_device", 00:26:06.147 "type": "bdev", 00:26:06.147 "chunks": [ 00:26:06.147 { 00:26:06.147 "id": 0, 00:26:06.147 "state": "CLOSED", 00:26:06.147 "utilization": 1.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 1, 00:26:06.147 "state": "CLOSED", 00:26:06.147 "utilization": 1.0 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 2, 00:26:06.147 "state": "OPEN", 00:26:06.147 "utilization": 0.001953125 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "id": 3, 00:26:06.147 "state": "OPEN", 00:26:06.147 "utilization": 0.0 00:26:06.147 } 00:26:06.147 ], 00:26:06.147 "read-only": true 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "name": "verbose_mode", 00:26:06.147 "value": true, 00:26:06.147 "unit": "", 00:26:06.147 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:06.147 }, 00:26:06.147 { 00:26:06.147 "name": "prep_upgrade_on_shutdown", 00:26:06.147 "value": false, 00:26:06.147 "unit": "", 00:26:06.147 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:06.147 } 00:26:06.147 ] 00:26:06.147 } 00:26:06.147 20:32:36 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:26:06.147 [2024-04-24 20:32:36.361431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.147 [2024-04-24 20:32:36.361503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:06.147 [2024-04-24 20:32:36.361523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:06.147 [2024-04-24 20:32:36.361537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.147 [2024-04-24 20:32:36.361571] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:26:06.147 [2024-04-24 20:32:36.361585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:06.147 [2024-04-24 20:32:36.361597] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:06.147 [2024-04-24 20:32:36.361610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.147 [2024-04-24 20:32:36.361634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.147 [2024-04-24 20:32:36.361647] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:06.147 [2024-04-24 20:32:36.361660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:06.147 [2024-04-24 20:32:36.361671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.147 [2024-04-24 20:32:36.361744] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.313 ms, result 0 00:26:06.147 true 00:26:06.406 20:32:36 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:26:06.406 20:32:36 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:06.406 20:32:36 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:06.406 20:32:36 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:26:06.406 20:32:36 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:26:06.406 20:32:36 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:06.665 [2024-04-24 20:32:36.757124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.665 [2024-04-24 20:32:36.757393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:06.665 [2024-04-24 20:32:36.757551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:06.665 [2024-04-24 20:32:36.757594] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.665 [2024-04-24 20:32:36.757674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.665 [2024-04-24 20:32:36.757691] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:06.665 [2024-04-24 20:32:36.757704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:06.665 [2024-04-24 20:32:36.757716] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.665 [2024-04-24 20:32:36.757741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:06.665 [2024-04-24 20:32:36.757754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:06.665 [2024-04-24 20:32:36.757767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:06.665 [2024-04-24 20:32:36.757779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:06.665 [2024-04-24 20:32:36.757882] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.727 ms, result 0 00:26:06.665 true 00:26:06.665 20:32:36 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:06.924 { 00:26:06.924 "name": "ftl", 00:26:06.924 "properties": [ 00:26:06.924 { 00:26:06.924 "name": "superblock_version", 00:26:06.924 "value": 5, 00:26:06.924 "read-only": true 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 
"name": "base_device", 00:26:06.924 "bands": [ 00:26:06.924 { 00:26:06.924 "id": 0, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 1, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 2, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 3, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 4, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 5, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 6, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 7, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 8, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 9, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 10, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 11, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 12, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 13, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 14, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 15, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 16, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 17, 00:26:06.924 "state": "FREE", 00:26:06.924 "validity": 0.0 00:26:06.924 } 00:26:06.924 ], 00:26:06.924 "read-only": true 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "name": "cache_device", 00:26:06.924 "type": "bdev", 00:26:06.924 "chunks": [ 00:26:06.924 { 00:26:06.924 "id": 0, 00:26:06.924 "state": "CLOSED", 00:26:06.924 "utilization": 1.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 1, 00:26:06.924 "state": "CLOSED", 00:26:06.924 "utilization": 1.0 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 2, 00:26:06.924 "state": "OPEN", 00:26:06.924 "utilization": 0.001953125 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "id": 3, 00:26:06.924 "state": "OPEN", 00:26:06.924 "utilization": 0.0 00:26:06.924 } 00:26:06.924 ], 00:26:06.924 "read-only": true 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "name": "verbose_mode", 00:26:06.924 "value": true, 00:26:06.924 "unit": "", 00:26:06.924 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:26:06.924 }, 00:26:06.924 { 00:26:06.924 "name": "prep_upgrade_on_shutdown", 00:26:06.924 "value": true, 00:26:06.924 "unit": "", 00:26:06.924 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:06.924 } 00:26:06.924 ] 00:26:06.924 } 00:26:06.924 20:32:36 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:26:06.924 20:32:36 -- ftl/common.sh@130 -- # [[ -n 83228 ]] 00:26:06.924 20:32:36 -- ftl/common.sh@131 -- # killprocess 83228 00:26:06.924 20:32:36 -- common/autotest_common.sh@936 -- # '[' -z 83228 ']' 
00:26:06.924 20:32:36 -- common/autotest_common.sh@940 -- # kill -0 83228 00:26:06.924 20:32:36 -- common/autotest_common.sh@941 -- # uname 00:26:06.924 20:32:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:06.924 20:32:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83228 00:26:06.924 killing process with pid 83228 00:26:06.924 20:32:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:06.924 20:32:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:06.924 20:32:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83228' 00:26:06.924 20:32:37 -- common/autotest_common.sh@955 -- # kill 83228 00:26:06.924 [2024-04-24 20:32:37.011828] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:06.924 20:32:37 -- common/autotest_common.sh@960 -- # wait 83228 00:26:08.399 [2024-04-24 20:32:38.228618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:08.399 [2024-04-24 20:32:38.249437] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.399 [2024-04-24 20:32:38.249493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:08.399 [2024-04-24 20:32:38.249512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:08.399 [2024-04-24 20:32:38.249526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:08.399 [2024-04-24 20:32:38.249555] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:08.399 [2024-04-24 20:32:38.253939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:08.399 [2024-04-24 20:32:38.253976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:08.399 [2024-04-24 20:32:38.253993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.370 ms 00:26:08.399 [2024-04-24 20:32:38.254006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.519 [2024-04-24 20:32:45.426561] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.519 [2024-04-24 20:32:45.426638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:16.519 [2024-04-24 20:32:45.426656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7184.162 ms 00:26:16.519 [2024-04-24 20:32:45.426671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.519 [2024-04-24 20:32:45.427798] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.519 [2024-04-24 20:32:45.427830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:16.519 [2024-04-24 20:32:45.427843] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.108 ms 00:26:16.519 [2024-04-24 20:32:45.427864] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.519 [2024-04-24 20:32:45.428841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.519 [2024-04-24 20:32:45.428871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:26:16.519 [2024-04-24 20:32:45.428884] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.945 ms 00:26:16.519 [2024-04-24 20:32:45.428895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.519 [2024-04-24 
20:32:45.444629] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.519 [2024-04-24 20:32:45.444681] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:16.519 [2024-04-24 20:32:45.444696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.687 ms 00:26:16.519 [2024-04-24 20:32:45.444707] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.519 [2024-04-24 20:32:45.454768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.519 [2024-04-24 20:32:45.454813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:16.519 [2024-04-24 20:32:45.454828] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 10.035 ms 00:26:16.519 [2024-04-24 20:32:45.454839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.519 [2024-04-24 20:32:45.454939] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.519 [2024-04-24 20:32:45.454953] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:16.519 [2024-04-24 20:32:45.454979] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:26:16.519 [2024-04-24 20:32:45.454990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.519 [2024-04-24 20:32:45.470793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.519 [2024-04-24 20:32:45.470834] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:16.519 [2024-04-24 20:32:45.470847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.809 ms 00:26:16.519 [2024-04-24 20:32:45.470868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.519 [2024-04-24 20:32:45.486633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.519 [2024-04-24 20:32:45.486675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:16.519 [2024-04-24 20:32:45.486689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.751 ms 00:26:16.519 [2024-04-24 20:32:45.486699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.519 [2024-04-24 20:32:45.502212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.519 [2024-04-24 20:32:45.502253] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:16.519 [2024-04-24 20:32:45.502266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.501 ms 00:26:16.519 [2024-04-24 20:32:45.502276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.519 [2024-04-24 20:32:45.517838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.519 [2024-04-24 20:32:45.517896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:16.519 [2024-04-24 20:32:45.517908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.515 ms 00:26:16.519 [2024-04-24 20:32:45.517918] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.519 [2024-04-24 20:32:45.517969] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:16.519 [2024-04-24 20:32:45.517989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:16.519 [2024-04-24 20:32:45.518003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:16.519 
[2024-04-24 20:32:45.518015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:16.519 [2024-04-24 20:32:45.518027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:16.519 [2024-04-24 20:32:45.518223] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:16.519 [2024-04-24 20:32:45.518232] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d6a7871b-4402-48fd-ac10-0a902c381bfb 00:26:16.519 [2024-04-24 20:32:45.518244] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:16.519 [2024-04-24 20:32:45.518254] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:26:16.519 [2024-04-24 20:32:45.518263] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:26:16.519 [2024-04-24 20:32:45.518273] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:26:16.519 [2024-04-24 20:32:45.518284] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:16.519 [2024-04-24 20:32:45.518299] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:16.519 [2024-04-24 20:32:45.518308] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:16.519 [2024-04-24 20:32:45.518317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:16.519 [2024-04-24 20:32:45.518326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:16.520 [2024-04-24 
20:32:45.518336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.520 [2024-04-24 20:32:45.518346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:16.520 [2024-04-24 20:32:45.518359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.368 ms 00:26:16.520 [2024-04-24 20:32:45.518369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.538973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.520 [2024-04-24 20:32:45.539017] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:16.520 [2024-04-24 20:32:45.539031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.610 ms 00:26:16.520 [2024-04-24 20:32:45.539048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.539337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:16.520 [2024-04-24 20:32:45.539353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:16.520 [2024-04-24 20:32:45.539364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.238 ms 00:26:16.520 [2024-04-24 20:32:45.539381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.609150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:16.520 [2024-04-24 20:32:45.609206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:16.520 [2024-04-24 20:32:45.609222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:16.520 [2024-04-24 20:32:45.609237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.609288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:16.520 [2024-04-24 20:32:45.609299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:16.520 [2024-04-24 20:32:45.609309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:16.520 [2024-04-24 20:32:45.609329] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.609421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:16.520 [2024-04-24 20:32:45.609434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:16.520 [2024-04-24 20:32:45.609444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:16.520 [2024-04-24 20:32:45.609455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.609479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:16.520 [2024-04-24 20:32:45.609489] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:16.520 [2024-04-24 20:32:45.609499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:16.520 [2024-04-24 20:32:45.609510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.737655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:16.520 [2024-04-24 20:32:45.737845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:16.520 [2024-04-24 20:32:45.737980] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:16.520 [2024-04-24 20:32:45.738071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:26:16.520 [2024-04-24 20:32:45.787979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:16.520 [2024-04-24 20:32:45.788190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:16.520 [2024-04-24 20:32:45.788267] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:16.520 [2024-04-24 20:32:45.788306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.788423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:16.520 [2024-04-24 20:32:45.788498] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:16.520 [2024-04-24 20:32:45.788513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:16.520 [2024-04-24 20:32:45.788524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.788576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:16.520 [2024-04-24 20:32:45.788596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:16.520 [2024-04-24 20:32:45.788607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:16.520 [2024-04-24 20:32:45.788618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.788736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:16.520 [2024-04-24 20:32:45.788750] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:16.520 [2024-04-24 20:32:45.788761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:16.520 [2024-04-24 20:32:45.788772] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.788810] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:16.520 [2024-04-24 20:32:45.788827] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:16.520 [2024-04-24 20:32:45.788839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:16.520 [2024-04-24 20:32:45.788849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.788910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:16.520 [2024-04-24 20:32:45.788922] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:16.520 [2024-04-24 20:32:45.788933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:16.520 [2024-04-24 20:32:45.788943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.789014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:16.520 [2024-04-24 20:32:45.789041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:16.520 [2024-04-24 20:32:45.789052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:16.520 [2024-04-24 20:32:45.789063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:16.520 [2024-04-24 20:32:45.789196] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7551.972 ms, result 0 00:26:20.750 20:32:50 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:20.750 20:32:50 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:26:20.750 20:32:50 -- ftl/common.sh@81 -- # local base_bdev= 00:26:20.750 20:32:50 
-- ftl/common.sh@82 -- # local cache_bdev= 00:26:20.750 20:32:50 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:20.750 20:32:50 -- ftl/common.sh@89 -- # spdk_tgt_pid=83855 00:26:20.750 20:32:50 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:20.750 20:32:50 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:20.750 20:32:50 -- ftl/common.sh@91 -- # waitforlisten 83855 00:26:20.750 20:32:50 -- common/autotest_common.sh@817 -- # '[' -z 83855 ']' 00:26:20.750 20:32:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.750 20:32:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:20.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.750 20:32:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.750 20:32:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:20.750 20:32:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.750 [2024-04-24 20:32:50.677297] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:26:20.750 [2024-04-24 20:32:50.677430] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83855 ] 00:26:20.750 [2024-04-24 20:32:50.854589] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.009 [2024-04-24 20:32:51.084890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.949 [2024-04-24 20:32:52.102686] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:21.949 [2024-04-24 20:32:52.102752] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:22.209 [2024-04-24 20:32:52.242692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.209 [2024-04-24 20:32:52.242748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:22.209 [2024-04-24 20:32:52.242765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:22.209 [2024-04-24 20:32:52.242776] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.209 [2024-04-24 20:32:52.242824] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.209 [2024-04-24 20:32:52.242836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:22.209 [2024-04-24 20:32:52.242847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:26:22.209 [2024-04-24 20:32:52.242872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.209 [2024-04-24 20:32:52.242906] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:22.209 [2024-04-24 20:32:52.244081] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:22.209 [2024-04-24 20:32:52.244115] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.209 [2024-04-24 20:32:52.244126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:22.209 [2024-04-24 20:32:52.244141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 1.225 ms 00:26:22.209 [2024-04-24 20:32:52.244151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.209 [2024-04-24 20:32:52.245552] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:22.209 [2024-04-24 20:32:52.265852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.209 [2024-04-24 20:32:52.265904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:22.209 [2024-04-24 20:32:52.265918] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.334 ms 00:26:22.209 [2024-04-24 20:32:52.265928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.209 [2024-04-24 20:32:52.265989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.209 [2024-04-24 20:32:52.266001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:22.209 [2024-04-24 20:32:52.266012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:22.209 [2024-04-24 20:32:52.266025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.209 [2024-04-24 20:32:52.272656] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.209 [2024-04-24 20:32:52.272684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:22.209 [2024-04-24 20:32:52.272696] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.569 ms 00:26:22.209 [2024-04-24 20:32:52.272706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.209 [2024-04-24 20:32:52.272747] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.209 [2024-04-24 20:32:52.272761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:22.210 [2024-04-24 20:32:52.272775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:26:22.210 [2024-04-24 20:32:52.272785] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.210 [2024-04-24 20:32:52.272828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.210 [2024-04-24 20:32:52.272840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:22.210 [2024-04-24 20:32:52.272851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:26:22.210 [2024-04-24 20:32:52.272895] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.210 [2024-04-24 20:32:52.272920] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:22.210 [2024-04-24 20:32:52.278834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.210 [2024-04-24 20:32:52.278878] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:22.210 [2024-04-24 20:32:52.278891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.930 ms 00:26:22.210 [2024-04-24 20:32:52.278901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.210 [2024-04-24 20:32:52.278934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.210 [2024-04-24 20:32:52.278946] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:22.210 [2024-04-24 20:32:52.278964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:22.210 [2024-04-24 20:32:52.278982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.210 [2024-04-24 
20:32:52.279032] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:22.210 [2024-04-24 20:32:52.279056] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:22.210 [2024-04-24 20:32:52.279096] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:22.210 [2024-04-24 20:32:52.279115] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:22.210 [2024-04-24 20:32:52.279183] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:22.210 [2024-04-24 20:32:52.279199] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:22.210 [2024-04-24 20:32:52.279212] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:22.210 [2024-04-24 20:32:52.279224] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:22.210 [2024-04-24 20:32:52.279237] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:22.210 [2024-04-24 20:32:52.279249] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:22.210 [2024-04-24 20:32:52.279258] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:22.210 [2024-04-24 20:32:52.279268] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:22.210 [2024-04-24 20:32:52.279278] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:22.210 [2024-04-24 20:32:52.279289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.210 [2024-04-24 20:32:52.279299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:22.210 [2024-04-24 20:32:52.279309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.259 ms 00:26:22.210 [2024-04-24 20:32:52.279322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.210 [2024-04-24 20:32:52.279382] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.210 [2024-04-24 20:32:52.279393] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:22.210 [2024-04-24 20:32:52.279403] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:26:22.210 [2024-04-24 20:32:52.279412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.210 [2024-04-24 20:32:52.279479] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:22.210 [2024-04-24 20:32:52.279491] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:22.210 [2024-04-24 20:32:52.279501] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:22.210 [2024-04-24 20:32:52.279511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.210 [2024-04-24 20:32:52.279524] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:22.210 [2024-04-24 20:32:52.279533] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:22.210 [2024-04-24 20:32:52.279543] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:22.210 [2024-04-24 20:32:52.279552] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 
00:26:22.210 [2024-04-24 20:32:52.279562] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:22.210 [2024-04-24 20:32:52.279572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.210 [2024-04-24 20:32:52.279581] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:22.210 [2024-04-24 20:32:52.279590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:22.210 [2024-04-24 20:32:52.279600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.210 [2024-04-24 20:32:52.279609] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:22.210 [2024-04-24 20:32:52.279618] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:22.210 [2024-04-24 20:32:52.279627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.210 [2024-04-24 20:32:52.279636] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:22.210 [2024-04-24 20:32:52.279645] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:22.210 [2024-04-24 20:32:52.279653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.210 [2024-04-24 20:32:52.279662] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:22.210 [2024-04-24 20:32:52.279671] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:22.210 [2024-04-24 20:32:52.279681] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:22.210 [2024-04-24 20:32:52.279690] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:22.210 [2024-04-24 20:32:52.279699] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:22.210 [2024-04-24 20:32:52.279710] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:22.210 [2024-04-24 20:32:52.279719] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:22.210 [2024-04-24 20:32:52.279728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:22.210 [2024-04-24 20:32:52.279736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:22.210 [2024-04-24 20:32:52.279745] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:22.210 [2024-04-24 20:32:52.279754] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:22.210 [2024-04-24 20:32:52.279763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:22.210 [2024-04-24 20:32:52.279772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:22.210 [2024-04-24 20:32:52.279781] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:22.210 [2024-04-24 20:32:52.279790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:22.210 [2024-04-24 20:32:52.279799] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:22.210 [2024-04-24 20:32:52.279808] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:22.210 [2024-04-24 20:32:52.279817] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.210 [2024-04-24 20:32:52.279826] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:22.210 [2024-04-24 20:32:52.279835] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:22.210 [2024-04-24 20:32:52.279844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 
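The NV-cache layout regions above square with the geometry printed at startup. Two quick consistency checks, as shell arithmetic: the l2p region stores one 4-byte entry per user block (L2P entries: 3774873, L2P address size: 4), and data_nvc divides evenly over the 4 cache chunks, which is why each 1 GiB fill pass closed exactly one chunk (utilization 1.0) in the property dumps earlier:

  echo $(( 3774873 * 4 ))   # 15099492 bytes ~= 14.40 MiB, fits the 14.50 MiB l2p region
  echo $(( 4096 / 4 ))      # 1024 MiB of data_nvc per chunk => one chunk per 1 GiB fill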
00:26:22.210 [2024-04-24 20:32:52.279852] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:22.210 [2024-04-24 20:32:52.279878] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:22.210 [2024-04-24 20:32:52.279888] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:22.210 [2024-04-24 20:32:52.279898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:22.210 [2024-04-24 20:32:52.279908] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:22.210 [2024-04-24 20:32:52.279917] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:22.210 [2024-04-24 20:32:52.279926] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:22.210 [2024-04-24 20:32:52.279935] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:22.210 [2024-04-24 20:32:52.279944] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:22.210 [2024-04-24 20:32:52.279963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:22.210 [2024-04-24 20:32:52.279974] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:22.210 [2024-04-24 20:32:52.279987] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:22.210 [2024-04-24 20:32:52.279998] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:22.210 [2024-04-24 20:32:52.280008] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:22.210 [2024-04-24 20:32:52.280018] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:22.210 [2024-04-24 20:32:52.280029] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:22.211 [2024-04-24 20:32:52.280039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:22.211 [2024-04-24 20:32:52.280049] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:22.211 [2024-04-24 20:32:52.280059] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:22.211 [2024-04-24 20:32:52.280069] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:22.211 [2024-04-24 20:32:52.280080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:22.211 [2024-04-24 20:32:52.280090] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:22.211 [2024-04-24 20:32:52.280100] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:22.211 [2024-04-24 20:32:52.280110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:22.211 [2024-04-24 20:32:52.280121] 
upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:22.211 [2024-04-24 20:32:52.280130] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:22.211 [2024-04-24 20:32:52.280141] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:22.211 [2024-04-24 20:32:52.280152] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:22.211 [2024-04-24 20:32:52.280162] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:22.211 [2024-04-24 20:32:52.280172] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:22.211 [2024-04-24 20:32:52.280182] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:22.211 [2024-04-24 20:32:52.280193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.211 [2024-04-24 20:32:52.280202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:22.211 [2024-04-24 20:32:52.280216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.749 ms 00:26:22.211 [2024-04-24 20:32:52.280227] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.211 [2024-04-24 20:32:52.305895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.211 [2024-04-24 20:32:52.305927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:22.211 [2024-04-24 20:32:52.305944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.660 ms 00:26:22.211 [2024-04-24 20:32:52.305954] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.211 [2024-04-24 20:32:52.305999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.211 [2024-04-24 20:32:52.306009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:22.211 [2024-04-24 20:32:52.306019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:22.211 [2024-04-24 20:32:52.306030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.211 [2024-04-24 20:32:52.362333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.211 [2024-04-24 20:32:52.362367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:22.211 [2024-04-24 20:32:52.362381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 56.344 ms 00:26:22.211 [2024-04-24 20:32:52.362392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.211 [2024-04-24 20:32:52.362435] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.211 [2024-04-24 20:32:52.362445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:22.211 [2024-04-24 20:32:52.362456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:22.211 [2024-04-24 20:32:52.362466] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.211 [2024-04-24 20:32:52.362942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.211 [2024-04-24 20:32:52.362956] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:22.211 [2024-04-24 20:32:52.362967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.424 ms 00:26:22.211 [2024-04-24 20:32:52.362985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.211 [2024-04-24 20:32:52.363023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.211 [2024-04-24 20:32:52.363038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:22.211 [2024-04-24 20:32:52.363049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:26:22.211 [2024-04-24 20:32:52.363059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.211 [2024-04-24 20:32:52.388186] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.211 [2024-04-24 20:32:52.388222] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:22.211 [2024-04-24 20:32:52.388236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.144 ms 00:26:22.211 [2024-04-24 20:32:52.388257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.211 [2024-04-24 20:32:52.408843] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:26:22.211 [2024-04-24 20:32:52.408897] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:22.211 [2024-04-24 20:32:52.408912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.211 [2024-04-24 20:32:52.408923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:26:22.211 [2024-04-24 20:32:52.408935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.583 ms 00:26:22.211 [2024-04-24 20:32:52.408944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.211 [2024-04-24 20:32:52.429020] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.211 [2024-04-24 20:32:52.429073] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:26:22.211 [2024-04-24 20:32:52.429087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 20.065 ms 00:26:22.211 [2024-04-24 20:32:52.429098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.472 [2024-04-24 20:32:52.447153] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.472 [2024-04-24 20:32:52.447189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:26:22.472 [2024-04-24 20:32:52.447201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.038 ms 00:26:22.472 [2024-04-24 20:32:52.447211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.472 [2024-04-24 20:32:52.465045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.472 [2024-04-24 20:32:52.465079] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:26:22.472 [2024-04-24 20:32:52.465092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.823 ms 00:26:22.472 [2024-04-24 20:32:52.465102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.472 [2024-04-24 20:32:52.465560] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.472 [2024-04-24 20:32:52.465576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 
00:26:22.472 [2024-04-24 20:32:52.465590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.360 ms 00:26:22.472 [2024-04-24 20:32:52.465600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.472 [2024-04-24 20:32:52.559156] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.472 [2024-04-24 20:32:52.559214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:22.472 [2024-04-24 20:32:52.559236] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 93.685 ms 00:26:22.472 [2024-04-24 20:32:52.559247] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.472 [2024-04-24 20:32:52.571803] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:22.472 [2024-04-24 20:32:52.572637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.472 [2024-04-24 20:32:52.572668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:22.472 [2024-04-24 20:32:52.572681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.358 ms 00:26:22.472 [2024-04-24 20:32:52.572692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.472 [2024-04-24 20:32:52.572771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.472 [2024-04-24 20:32:52.572784] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:26:22.472 [2024-04-24 20:32:52.572799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:22.472 [2024-04-24 20:32:52.572809] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.472 [2024-04-24 20:32:52.572884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.472 [2024-04-24 20:32:52.572897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:22.472 [2024-04-24 20:32:52.572907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:26:22.472 [2024-04-24 20:32:52.572917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.472 [2024-04-24 20:32:52.575005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.472 [2024-04-24 20:32:52.575035] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:22.472 [2024-04-24 20:32:52.575046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.054 ms 00:26:22.472 [2024-04-24 20:32:52.575056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.472 [2024-04-24 20:32:52.575092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.472 [2024-04-24 20:32:52.575103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:22.472 [2024-04-24 20:32:52.575114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:26:22.472 [2024-04-24 20:32:52.575124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.472 [2024-04-24 20:32:52.575161] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:22.472 [2024-04-24 20:32:52.575173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.472 [2024-04-24 20:32:52.575183] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:22.472 [2024-04-24 20:32:52.575193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:22.472 [2024-04-24 20:32:52.575203] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.472 [2024-04-24 20:32:52.613876] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.472 [2024-04-24 20:32:52.613914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:22.473 [2024-04-24 20:32:52.613929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 38.710 ms 00:26:22.473 [2024-04-24 20:32:52.613940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.473 [2024-04-24 20:32:52.614013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:22.473 [2024-04-24 20:32:52.614024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:22.473 [2024-04-24 20:32:52.614036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:26:22.473 [2024-04-24 20:32:52.614052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:22.473 [2024-04-24 20:32:52.615119] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 372.553 ms, result 0 00:26:22.473 [2024-04-24 20:32:52.630182] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.473 [2024-04-24 20:32:52.646179] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:22.473 [2024-04-24 20:32:52.656100] nvmf_rpc.c: 606:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:22.473 [2024-04-24 20:32:52.656339] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:23.041 20:32:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:23.041 20:32:53 -- common/autotest_common.sh@850 -- # return 0 00:26:23.041 20:32:53 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:23.041 20:32:53 -- ftl/common.sh@95 -- # return 0 00:26:23.041 20:32:53 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:26:23.300 [2024-04-24 20:32:53.295985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.300 [2024-04-24 20:32:53.296047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:26:23.300 [2024-04-24 20:32:53.296066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:23.300 [2024-04-24 20:32:53.296077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.300 [2024-04-24 20:32:53.296124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.300 [2024-04-24 20:32:53.296138] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:26:23.300 [2024-04-24 20:32:53.296149] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:23.300 [2024-04-24 20:32:53.296159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:23.300 [2024-04-24 20:32:53.296180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:23.300 [2024-04-24 20:32:53.296191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:26:23.300 [2024-04-24 20:32:53.296201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:23.300 [2024-04-24 20:32:53.296211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:26:23.300 [2024-04-24 20:32:53.296275] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.282 ms, result 0 00:26:23.300 true 00:26:23.300 20:32:53 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:23.300 { 00:26:23.300 "name": "ftl", 00:26:23.300 "properties": [ 00:26:23.300 { 00:26:23.300 "name": "superblock_version", 00:26:23.300 "value": 5, 00:26:23.300 "read-only": true 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "name": "base_device", 00:26:23.300 "bands": [ 00:26:23.300 { 00:26:23.300 "id": 0, 00:26:23.300 "state": "CLOSED", 00:26:23.300 "validity": 1.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 1, 00:26:23.300 "state": "CLOSED", 00:26:23.300 "validity": 1.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 2, 00:26:23.300 "state": "CLOSED", 00:26:23.300 "validity": 0.007843137254901933 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 3, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 4, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 5, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 6, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 7, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 8, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 9, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 10, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 11, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 12, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 13, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 14, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 15, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 16, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "id": 17, 00:26:23.300 "state": "FREE", 00:26:23.300 "validity": 0.0 00:26:23.300 } 00:26:23.300 ], 00:26:23.300 "read-only": true 00:26:23.300 }, 00:26:23.300 { 00:26:23.300 "name": "cache_device", 00:26:23.300 "type": "bdev", 00:26:23.300 "chunks": [ 00:26:23.300 { 00:26:23.300 "id": 0, 00:26:23.300 "state": "OPEN", 00:26:23.300 "utilization": 0.0 00:26:23.300 }, 00:26:23.300 { 00:26:23.301 "id": 1, 00:26:23.301 "state": "OPEN", 00:26:23.301 "utilization": 0.0 00:26:23.301 }, 00:26:23.301 { 00:26:23.301 "id": 2, 00:26:23.301 "state": "FREE", 00:26:23.301 "utilization": 0.0 00:26:23.301 }, 00:26:23.301 { 00:26:23.301 "id": 3, 00:26:23.301 "state": "FREE", 00:26:23.301 "utilization": 0.0 00:26:23.301 } 00:26:23.301 ], 00:26:23.301 "read-only": true 00:26:23.301 }, 00:26:23.301 { 00:26:23.301 "name": "verbose_mode", 00:26:23.301 "value": true, 00:26:23.301 "unit": "", 00:26:23.301 "desc": "In verbose mode, user is able to get access to additional advanced FTL 
properties" 00:26:23.301 }, 00:26:23.301 { 00:26:23.301 "name": "prep_upgrade_on_shutdown", 00:26:23.301 "value": false, 00:26:23.301 "unit": "", 00:26:23.301 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:26:23.301 } 00:26:23.301 ] 00:26:23.301 } 00:26:23.301 20:32:53 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:26:23.301 20:32:53 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:26:23.301 20:32:53 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:23.560 20:32:53 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:26:23.560 20:32:53 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:26:23.560 20:32:53 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:26:23.560 20:32:53 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:26:23.560 20:32:53 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:26:23.819 Validate MD5 checksum, iteration 1 00:26:23.819 20:32:53 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:26:23.819 20:32:53 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:26:23.819 20:32:53 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:26:23.819 20:32:53 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:23.819 20:32:53 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:23.819 20:32:53 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:23.819 20:32:53 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:23.819 20:32:53 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:23.819 20:32:53 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:23.819 20:32:53 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:23.819 20:32:53 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:23.819 20:32:53 -- ftl/common.sh@154 -- # return 0 00:26:23.819 20:32:53 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:23.819 [2024-04-24 20:32:53.977439] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
00:26:23.819 [2024-04-24 20:32:53.977726] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83901 ] 00:26:24.077 [2024-04-24 20:32:54.147153] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.336 [2024-04-24 20:32:54.426792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.346  Copying: 623/1024 [MB] (623 MBps) Copying: 1024/1024 [MB] (average 619 MBps) 00:26:29.346 00:26:29.346 20:32:59 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:29.346 20:32:59 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:31.274 20:33:01 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:31.274 Validate MD5 checksum, iteration 2 00:26:31.274 20:33:01 -- ftl/upgrade_shutdown.sh@103 -- # sum=d262c4861a96b007ac72d67b9cb9f55a 00:26:31.274 20:33:01 -- ftl/upgrade_shutdown.sh@105 -- # [[ d262c4861a96b007ac72d67b9cb9f55a != \d\2\6\2\c\4\8\6\1\a\9\6\b\0\0\7\a\c\7\2\d\6\7\b\9\c\b\9\f\5\5\a ]] 00:26:31.274 20:33:01 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:31.274 20:33:01 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:31.274 20:33:01 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:31.274 20:33:01 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:31.274 20:33:01 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:31.274 20:33:01 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:31.274 20:33:01 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:31.274 20:33:01 -- ftl/common.sh@154 -- # return 0 00:26:31.274 20:33:01 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:31.274 [2024-04-24 20:33:01.239546] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
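Each "Validate MD5 checksum" iteration above ends in a plain md5sum comparison; the run of backslash-escaped characters in the trace is just how bash xtrace renders the literal on the right-hand side of [[ ... != ... ]]. One iteration in essence (file path and iteration-1 sum taken from this run; the expected variable is illustrative):

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    expected=d262c4861a96b007ac72d67b9cb9f55a   # iteration-1 checksum from this run

    sum=$(md5sum "$file" | cut -f1 -d ' ')
    if [[ $sum != "$expected" ]]; then
      echo "checksum mismatch: $sum != $expected" >&2
      exit 1
    fi

The same pair of sums is recomputed after the dirty restart below, which is what actually proves the recovery preserved the data.
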
00:26:31.274 [2024-04-24 20:33:01.241795] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83980 ] 00:26:31.274 [2024-04-24 20:33:01.430580] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.542 [2024-04-24 20:33:01.723246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.550  Copying: 643/1024 [MB] (643 MBps) Copying: 1024/1024 [MB] (average 641 MBps) 00:26:36.550 00:26:36.550 20:33:06 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:36.550 20:33:06 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:38.453 20:33:08 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:38.453 20:33:08 -- ftl/upgrade_shutdown.sh@103 -- # sum=c7f123b2f913ad811dabd6a0b92762e6 00:26:38.453 20:33:08 -- ftl/upgrade_shutdown.sh@105 -- # [[ c7f123b2f913ad811dabd6a0b92762e6 != \c\7\f\1\2\3\b\2\f\9\1\3\a\d\8\1\1\d\a\b\d\6\a\0\b\9\2\7\6\2\e\6 ]] 00:26:38.453 20:33:08 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:38.453 20:33:08 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:38.453 20:33:08 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:26:38.453 20:33:08 -- ftl/common.sh@137 -- # [[ -n 83855 ]] 00:26:38.453 20:33:08 -- ftl/common.sh@138 -- # kill -9 83855 00:26:38.453 20:33:08 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:26:38.453 20:33:08 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:26:38.453 20:33:08 -- ftl/common.sh@81 -- # local base_bdev= 00:26:38.453 20:33:08 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:38.453 20:33:08 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:38.453 20:33:08 -- ftl/common.sh@89 -- # spdk_tgt_pid=84059 00:26:38.453 20:33:08 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:38.453 20:33:08 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:38.453 20:33:08 -- ftl/common.sh@91 -- # waitforlisten 84059 00:26:38.453 20:33:08 -- common/autotest_common.sh@817 -- # '[' -z 84059 ']' 00:26:38.453 20:33:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:38.453 20:33:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:38.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:38.453 20:33:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:38.453 20:33:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:38.453 20:33:08 -- common/autotest_common.sh@10 -- # set +x 00:26:38.453 [2024-04-24 20:33:08.486424] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
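This is the pivot of the upgrade_shutdown test: kill -9 denies FTL a clean shutdown, and a fresh spdk_tgt is brought up from the saved tgt.json, so the startup that follows has to take the dirty-recovery path (visible below as "Recover band state", "Restore P2L checkpoints" and the open-chunk recovery). Roughly, using the commands as traced (waitforlisten is the autotest_common.sh helper seen in the trace):

    # Dirty shutdown: SIGKILL leaves the FTL superblock in the dirty state
    # set earlier by "Set FTL dirty state".
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    # Restart from the saved config and wait for the RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
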
00:26:38.454 [2024-04-24 20:33:08.486539] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84059 ] 00:26:38.454 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 816: 83855 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:26:38.454 [2024-04-24 20:33:08.655834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.712 [2024-04-24 20:33:08.892695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.089 [2024-04-24 20:33:09.916230] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:40.089 [2024-04-24 20:33:09.916296] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:26:40.089 [2024-04-24 20:33:10.059745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.089 [2024-04-24 20:33:10.059803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:40.089 [2024-04-24 20:33:10.059819] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:26:40.089 [2024-04-24 20:33:10.059830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.089 [2024-04-24 20:33:10.059901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.089 [2024-04-24 20:33:10.059916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:40.089 [2024-04-24 20:33:10.059927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:26:40.089 [2024-04-24 20:33:10.059938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.089 [2024-04-24 20:33:10.059973] mngt/ftl_mngt_bdev.c: 194:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:40.089 [2024-04-24 20:33:10.061167] mngt/ftl_mngt_bdev.c: 235:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:40.089 [2024-04-24 20:33:10.061196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.089 [2024-04-24 20:33:10.061207] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:40.089 [2024-04-24 20:33:10.061221] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.240 ms 00:26:40.089 [2024-04-24 20:33:10.061231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.089 [2024-04-24 20:33:10.061557] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:26:40.089 [2024-04-24 20:33:10.088738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.089 [2024-04-24 20:33:10.088840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:26:40.089 [2024-04-24 20:33:10.088920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.225 ms 00:26:40.089 [2024-04-24 20:33:10.088955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.089 [2024-04-24 20:33:10.103691] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.089 [2024-04-24 20:33:10.103851] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:26:40.089 [2024-04-24 20:33:10.103987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:26:40.089 [2024-04-24 20:33:10.104025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:40.089 [2024-04-24 20:33:10.104570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.089 [2024-04-24 20:33:10.104694] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:40.089 [2024-04-24 20:33:10.104776] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.417 ms 00:26:40.089 [2024-04-24 20:33:10.104810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.089 [2024-04-24 20:33:10.104895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.089 [2024-04-24 20:33:10.104972] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:40.089 [2024-04-24 20:33:10.105059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:26:40.089 [2024-04-24 20:33:10.105089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.089 [2024-04-24 20:33:10.105146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.089 [2024-04-24 20:33:10.105179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:40.089 [2024-04-24 20:33:10.105212] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:40.089 [2024-04-24 20:33:10.105241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.089 [2024-04-24 20:33:10.105361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:40.089 [2024-04-24 20:33:10.110660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.089 [2024-04-24 20:33:10.110781] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:40.089 [2024-04-24 20:33:10.110865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 5.317 ms 00:26:40.089 [2024-04-24 20:33:10.110902] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.089 [2024-04-24 20:33:10.110957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.089 [2024-04-24 20:33:10.110998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:40.089 [2024-04-24 20:33:10.111047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:40.089 [2024-04-24 20:33:10.111077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.089 [2024-04-24 20:33:10.111138] ftl_layout.c: 602:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:26:40.089 [2024-04-24 20:33:10.111188] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:26:40.089 [2024-04-24 20:33:10.111333] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:26:40.089 [2024-04-24 20:33:10.111397] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:26:40.089 [2024-04-24 20:33:10.111506] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:26:40.089 [2024-04-24 20:33:10.111630] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:40.089 [2024-04-24 20:33:10.111731] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:26:40.089 [2024-04-24 20:33:10.111790] ftl_layout.c: 673:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base 
device capacity: 20480.00 MiB 00:26:40.089 [2024-04-24 20:33:10.111842] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:40.089 [2024-04-24 20:33:10.111963] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:40.089 [2024-04-24 20:33:10.111975] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:40.089 [2024-04-24 20:33:10.111985] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:26:40.089 [2024-04-24 20:33:10.111995] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:26:40.089 [2024-04-24 20:33:10.112007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.089 [2024-04-24 20:33:10.112018] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:40.089 [2024-04-24 20:33:10.112028] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.872 ms 00:26:40.089 [2024-04-24 20:33:10.112039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.089 [2024-04-24 20:33:10.112121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.089 [2024-04-24 20:33:10.112133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:40.089 [2024-04-24 20:33:10.112143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:26:40.089 [2024-04-24 20:33:10.112156] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.089 [2024-04-24 20:33:10.112224] ftl_layout.c: 756:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:40.089 [2024-04-24 20:33:10.112236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:40.089 [2024-04-24 20:33:10.112247] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:40.089 [2024-04-24 20:33:10.112258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:40.089 [2024-04-24 20:33:10.112268] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:40.089 [2024-04-24 20:33:10.112278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:40.089 [2024-04-24 20:33:10.112287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:40.089 [2024-04-24 20:33:10.112296] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:40.089 [2024-04-24 20:33:10.112306] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:40.089 [2024-04-24 20:33:10.112315] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:40.089 [2024-04-24 20:33:10.112324] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:40.089 [2024-04-24 20:33:10.112333] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:26:40.089 [2024-04-24 20:33:10.112341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:40.089 [2024-04-24 20:33:10.112350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:40.089 [2024-04-24 20:33:10.112360] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:26:40.089 [2024-04-24 20:33:10.112369] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:40.089 [2024-04-24 20:33:10.112377] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:40.089 [2024-04-24 20:33:10.112386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:26:40.089 
[2024-04-24 20:33:10.112397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:40.089 [2024-04-24 20:33:10.112406] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:26:40.089 [2024-04-24 20:33:10.112415] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:26:40.089 [2024-04-24 20:33:10.112424] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:26:40.089 [2024-04-24 20:33:10.112432] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:40.089 [2024-04-24 20:33:10.112441] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:40.089 [2024-04-24 20:33:10.112450] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:40.089 [2024-04-24 20:33:10.112459] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:40.089 [2024-04-24 20:33:10.112469] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:26:40.089 [2024-04-24 20:33:10.112477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:40.090 [2024-04-24 20:33:10.112486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:40.090 [2024-04-24 20:33:10.112495] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:40.090 [2024-04-24 20:33:10.112504] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:40.090 [2024-04-24 20:33:10.112513] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:40.090 [2024-04-24 20:33:10.112521] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:26:40.090 [2024-04-24 20:33:10.112530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:26:40.090 [2024-04-24 20:33:10.112539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:40.090 [2024-04-24 20:33:10.112548] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:40.090 [2024-04-24 20:33:10.112557] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:40.090 [2024-04-24 20:33:10.112566] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:40.090 [2024-04-24 20:33:10.112575] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:26:40.090 [2024-04-24 20:33:10.112584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:40.090 [2024-04-24 20:33:10.112592] ftl_layout.c: 763:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:26:40.090 [2024-04-24 20:33:10.112601] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:40.090 [2024-04-24 20:33:10.112611] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:40.090 [2024-04-24 20:33:10.112620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:40.090 [2024-04-24 20:33:10.112633] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:40.090 [2024-04-24 20:33:10.112643] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:40.090 [2024-04-24 20:33:10.112652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:40.090 [2024-04-24 20:33:10.112671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:40.090 [2024-04-24 20:33:10.112680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:40.090 [2024-04-24 20:33:10.112689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:40.090 
[2024-04-24 20:33:10.112701] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:40.090 [2024-04-24 20:33:10.112713] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:40.090 [2024-04-24 20:33:10.112725] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:40.090 [2024-04-24 20:33:10.112735] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:26:40.090 [2024-04-24 20:33:10.112746] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:26:40.090 [2024-04-24 20:33:10.112756] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:26:40.090 [2024-04-24 20:33:10.112766] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:26:40.090 [2024-04-24 20:33:10.112776] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:26:40.090 [2024-04-24 20:33:10.112786] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:26:40.090 [2024-04-24 20:33:10.112797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:26:40.090 [2024-04-24 20:33:10.112808] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:26:40.090 [2024-04-24 20:33:10.112819] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:26:40.090 [2024-04-24 20:33:10.112829] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:26:40.090 [2024-04-24 20:33:10.112839] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:26:40.090 [2024-04-24 20:33:10.112849] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:26:40.090 [2024-04-24 20:33:10.112859] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:40.090 [2024-04-24 20:33:10.112881] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:40.090 [2024-04-24 20:33:10.112892] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:40.090 [2024-04-24 20:33:10.112902] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:40.090 [2024-04-24 20:33:10.112912] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:40.090 [2024-04-24 20:33:10.112922] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x480120 blk_sz:0x7fee0 00:26:40.090 [2024-04-24 20:33:10.112933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.090 [2024-04-24 20:33:10.112943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:40.090 [2024-04-24 20:33:10.112952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.745 ms 00:26:40.090 [2024-04-24 20:33:10.112962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.090 [2024-04-24 20:33:10.136262] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.090 [2024-04-24 20:33:10.136292] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:40.090 [2024-04-24 20:33:10.136306] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.287 ms 00:26:40.090 [2024-04-24 20:33:10.136316] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.090 [2024-04-24 20:33:10.136354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.090 [2024-04-24 20:33:10.136365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:40.090 [2024-04-24 20:33:10.136379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:26:40.090 [2024-04-24 20:33:10.136388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.090 [2024-04-24 20:33:10.187849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.090 [2024-04-24 20:33:10.187991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:40.090 [2024-04-24 20:33:10.188115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 51.486 ms 00:26:40.090 [2024-04-24 20:33:10.188151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.090 [2024-04-24 20:33:10.188215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.090 [2024-04-24 20:33:10.188247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:40.090 [2024-04-24 20:33:10.188322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:40.090 [2024-04-24 20:33:10.188356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.090 [2024-04-24 20:33:10.188503] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.090 [2024-04-24 20:33:10.188593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:40.090 [2024-04-24 20:33:10.188630] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:26:40.090 [2024-04-24 20:33:10.188659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.090 [2024-04-24 20:33:10.188762] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.090 [2024-04-24 20:33:10.188799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:40.090 [2024-04-24 20:33:10.188829] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:26:40.090 [2024-04-24 20:33:10.189010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.090 [2024-04-24 20:33:10.211921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.090 [2024-04-24 20:33:10.212059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:40.090 [2024-04-24 20:33:10.212130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.892 ms 00:26:40.090 [2024-04-24 20:33:10.212165] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.090 [2024-04-24 20:33:10.212308] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.090 [2024-04-24 20:33:10.212415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:26:40.090 [2024-04-24 20:33:10.212486] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:40.090 [2024-04-24 20:33:10.212516] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.090 [2024-04-24 20:33:10.238429] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.090 [2024-04-24 20:33:10.238565] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:26:40.090 [2024-04-24 20:33:10.238636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.914 ms 00:26:40.090 [2024-04-24 20:33:10.238671] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.090 [2024-04-24 20:33:10.253823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.090 [2024-04-24 20:33:10.253968] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:40.090 [2024-04-24 20:33:10.254045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.369 ms 00:26:40.090 [2024-04-24 20:33:10.254086] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.349 [2024-04-24 20:33:10.343209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.349 [2024-04-24 20:33:10.343427] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:26:40.349 [2024-04-24 20:33:10.343562] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 89.176 ms 00:26:40.349 [2024-04-24 20:33:10.343600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.349 [2024-04-24 20:33:10.343725] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:26:40.349 [2024-04-24 20:33:10.343873] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:26:40.349 [2024-04-24 20:33:10.343957] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:26:40.349 [2024-04-24 20:33:10.344076] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:26:40.349 [2024-04-24 20:33:10.344128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.349 [2024-04-24 20:33:10.344158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:26:40.349 [2024-04-24 20:33:10.344243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.450 ms 00:26:40.349 [2024-04-24 20:33:10.344318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.349 [2024-04-24 20:33:10.344430] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:26:40.349 [2024-04-24 20:33:10.344528] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.349 [2024-04-24 20:33:10.344561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:26:40.349 [2024-04-24 20:33:10.344682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.098 ms 00:26:40.349 [2024-04-24 20:33:10.344739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.349 [2024-04-24 20:33:10.368782] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.349 [2024-04-24 20:33:10.368930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:26:40.349 [2024-04-24 20:33:10.368958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.981 ms 00:26:40.349 [2024-04-24 20:33:10.368969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.349 [2024-04-24 20:33:10.381971] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.349 [2024-04-24 20:33:10.382006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:26:40.349 [2024-04-24 20:33:10.382019] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:26:40.349 [2024-04-24 20:33:10.382029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.349 [2024-04-24 20:33:10.382086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:40.349 [2024-04-24 20:33:10.382098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:26:40.349 [2024-04-24 20:33:10.382109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:26:40.349 [2024-04-24 20:33:10.382123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:40.349 [2024-04-24 20:33:10.382315] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:26:40.917 [2024-04-24 20:33:10.923265] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:26:40.917 [2024-04-24 20:33:10.923421] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:26:41.486 [2024-04-24 20:33:11.459766] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:26:41.486 [2024-04-24 20:33:11.459871] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:41.486 [2024-04-24 20:33:11.459889] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:26:41.486 [2024-04-24 20:33:11.459903] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.486 [2024-04-24 20:33:11.459914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:26:41.486 [2024-04-24 20:33:11.459929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1079.509 ms 00:26:41.486 [2024-04-24 20:33:11.459940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.486 [2024-04-24 20:33:11.459975] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.486 [2024-04-24 20:33:11.459986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:26:41.486 [2024-04-24 20:33:11.459996] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:26:41.486 [2024-04-24 20:33:11.460006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.486 [2024-04-24 20:33:11.472608] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:41.486 [2024-04-24 20:33:11.472776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.486 [2024-04-24 20:33:11.472790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:41.486 [2024-04-24 20:33:11.472802] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.767 ms 00:26:41.486 [2024-04-24 20:33:11.472812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.486 [2024-04-24 20:33:11.473409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.486 [2024-04-24 20:33:11.473429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:26:41.486 [2024-04-24 20:33:11.473440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:26:41.486 [2024-04-24 20:33:11.473450] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.486 [2024-04-24 20:33:11.475526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.486 [2024-04-24 20:33:11.475552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:26:41.486 [2024-04-24 20:33:11.475564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.056 ms 00:26:41.486 [2024-04-24 20:33:11.475573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.486 [2024-04-24 20:33:11.514248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.487 [2024-04-24 20:33:11.514289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:26:41.487 [2024-04-24 20:33:11.514304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 38.710 ms 00:26:41.487 [2024-04-24 20:33:11.514314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.487 [2024-04-24 20:33:11.514439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.487 [2024-04-24 20:33:11.514456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:41.487 [2024-04-24 20:33:11.514467] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:26:41.487 [2024-04-24 20:33:11.514477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.487 [2024-04-24 20:33:11.516579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.487 [2024-04-24 20:33:11.516607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:26:41.487 [2024-04-24 20:33:11.516619] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.086 ms 00:26:41.487 [2024-04-24 20:33:11.516628] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.487 [2024-04-24 20:33:11.516661] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.487 [2024-04-24 20:33:11.516672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:41.487 [2024-04-24 20:33:11.516683] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:26:41.487 [2024-04-24 20:33:11.516693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.487 [2024-04-24 20:33:11.516726] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:26:41.487 [2024-04-24 20:33:11.516738] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:41.487 [2024-04-24 20:33:11.516747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:26:41.487 [2024-04-24 20:33:11.516757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:26:41.487 [2024-04-24 20:33:11.516767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.487 [2024-04-24 20:33:11.516815] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:26:41.487 [2024-04-24 20:33:11.516838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:41.487 [2024-04-24 20:33:11.516847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:26:41.487 [2024-04-24 20:33:11.516872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:41.487 [2024-04-24 20:33:11.517920] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1460.115 ms, result 0 00:26:41.487 [2024-04-24 20:33:11.530592] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.487 [2024-04-24 20:33:11.546561] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:26:41.487 [2024-04-24 20:33:11.556469] nvmf_rpc.c: 606:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:41.487 [2024-04-24 20:33:11.556711] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:41.487 20:33:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:41.487 20:33:11 -- common/autotest_common.sh@850 -- # return 0 00:26:41.487 20:33:11 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:41.487 20:33:11 -- ftl/common.sh@95 -- # return 0 00:26:41.487 20:33:11 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:26:41.487 20:33:11 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:26:41.487 20:33:11 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:26:41.487 20:33:11 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:41.487 Validate MD5 checksum, iteration 1 00:26:41.487 20:33:11 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:26:41.487 20:33:11 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:41.487 20:33:11 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:41.487 20:33:11 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:41.487 20:33:11 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:41.487 20:33:11 -- ftl/common.sh@154 -- # return 0 00:26:41.487 20:33:11 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:26:41.487 [2024-04-24 20:33:11.689833] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
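tcp_dd, expanded in the trace above, is spdk_dd driven through the NVMe/TCP initiator configuration, so reads of ftln1 go over the loopback target listening on 127.0.0.1 port 4420. A sketch under that reading (the spdk_dd command line is copied verbatim from the trace; wrapping it in a function is illustrative only):

    tcp_dd() {
      # Run spdk_dd against the initiator config so ftln1 resolves over NVMe/TCP.
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json "$@"
    }

    # First post-recovery window: 1024 MiB from offset 0.
    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
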
00:26:41.487 [2024-04-24 20:33:11.689954] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84099 ] 00:26:41.746 [2024-04-24 20:33:11.860043] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.006 [2024-04-24 20:33:12.098280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.461  Copying: 718/1024 [MB] (718 MBps) Copying: 1024/1024 [MB] (average 690 MBps) 00:26:47.461 00:26:47.461 20:33:17 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:26:47.461 20:33:17 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:48.838 20:33:19 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:48.838 Validate MD5 checksum, iteration 2 00:26:48.838 20:33:19 -- ftl/upgrade_shutdown.sh@103 -- # sum=d262c4861a96b007ac72d67b9cb9f55a 00:26:48.838 20:33:19 -- ftl/upgrade_shutdown.sh@105 -- # [[ d262c4861a96b007ac72d67b9cb9f55a != \d\2\6\2\c\4\8\6\1\a\9\6\b\0\0\7\a\c\7\2\d\6\7\b\9\c\b\9\f\5\5\a ]] 00:26:48.838 20:33:19 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:48.838 20:33:19 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:48.838 20:33:19 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:26:48.838 20:33:19 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:48.838 20:33:19 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:26:48.838 20:33:19 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:26:48.838 20:33:19 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:26:48.838 20:33:19 -- ftl/common.sh@154 -- # return 0 00:26:48.838 20:33:19 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:26:49.096 [2024-04-24 20:33:19.138111] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 
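Annotation: the xtrace above spells out the whole per-iteration validate cycle from upgrade_shutdown.sh: read a 1 GiB window of the restored FTL bdev over NVMe/TCP with tcp_dd, hash the file, and compare against the digest recorded before shutdown (the backslash-riddled \d\2\6\2... form is just xtrace printing a pattern whose every character is escaped so it matches literally). Condensed into a sketch; block size, queue depth, bdev name and iteration count are the ones logged, testdir stands for /home/vagrant/spdk_repo/spdk/test/ftl, and expected_md5 stands in for the digest read back from file.md5:

    iterations=2
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # read 1024 x 1 MiB blocks from bdev ftln1, offset past prior windows
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 \
               --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        [[ $sum == "$expected_md5" ]] || exit 1   # any mismatch fails the test
    done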
00:26:49.096 [2024-04-24 20:33:19.138513] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84177 ] 00:26:49.356 [2024-04-24 20:33:19.333155] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.356 [2024-04-24 20:33:19.572901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.434  Copying: 710/1024 [MB] (710 MBps) Copying: 1024/1024 [MB] (average 704 MBps) 00:26:53.434 00:26:53.434 20:33:23 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:26:53.434 20:33:23 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:55.359 20:33:25 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:26:55.359 20:33:25 -- ftl/upgrade_shutdown.sh@103 -- # sum=c7f123b2f913ad811dabd6a0b92762e6 00:26:55.359 20:33:25 -- ftl/upgrade_shutdown.sh@105 -- # [[ c7f123b2f913ad811dabd6a0b92762e6 != \c\7\f\1\2\3\b\2\f\9\1\3\a\d\8\1\1\d\a\b\d\6\a\0\b\9\2\7\6\2\e\6 ]] 00:26:55.359 20:33:25 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:26:55.359 20:33:25 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:26:55.359 20:33:25 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:26:55.359 20:33:25 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:26:55.359 20:33:25 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:26:55.359 20:33:25 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:26:55.359 20:33:25 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:26:55.359 20:33:25 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:26:55.359 20:33:25 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:26:55.359 20:33:25 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:26:55.359 20:33:25 -- ftl/common.sh@130 -- # [[ -n 84059 ]] 00:26:55.359 20:33:25 -- ftl/common.sh@131 -- # killprocess 84059 00:26:55.359 20:33:25 -- common/autotest_common.sh@936 -- # '[' -z 84059 ']' 00:26:55.359 20:33:25 -- common/autotest_common.sh@940 -- # kill -0 84059 00:26:55.359 20:33:25 -- common/autotest_common.sh@941 -- # uname 00:26:55.359 20:33:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:55.359 20:33:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84059 00:26:55.359 killing process with pid 84059 00:26:55.359 20:33:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:55.359 20:33:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:55.359 20:33:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84059' 00:26:55.359 20:33:25 -- common/autotest_common.sh@955 -- # kill 84059 00:26:55.359 20:33:25 -- common/autotest_common.sh@960 -- # wait 84059 00:26:55.359 [2024-04-24 20:33:25.376584] app.c: 937:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:56.298 [2024-04-24 20:33:26.457395] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:26:56.298 [2024-04-24 20:33:26.475280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.299 [2024-04-24 20:33:26.475321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:26:56.299 [2024-04-24 20:33:26.475336] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:26:56.299 [2024-04-24 20:33:26.475347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.299 [2024-04-24 20:33:26.475373] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:26:56.299 [2024-04-24 20:33:26.478681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.299 [2024-04-24 20:33:26.478709] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:26:56.299 [2024-04-24 20:33:26.478722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.298 ms 00:26:56.299 [2024-04-24 20:33:26.478732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.299 [2024-04-24 20:33:26.478951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.299 [2024-04-24 20:33:26.478964] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:26:56.299 [2024-04-24 20:33:26.478975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:26:56.299 [2024-04-24 20:33:26.478991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.299 [2024-04-24 20:33:26.480050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.299 [2024-04-24 20:33:26.480080] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:26:56.299 [2024-04-24 20:33:26.480091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.044 ms 00:26:56.299 [2024-04-24 20:33:26.480101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.299 [2024-04-24 20:33:26.481031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.299 [2024-04-24 20:33:26.481047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:26:56.299 [2024-04-24 20:33:26.481058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.901 ms 00:26:56.299 [2024-04-24 20:33:26.481067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.299 [2024-04-24 20:33:26.496178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.299 [2024-04-24 20:33:26.496315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:26:56.299 [2024-04-24 20:33:26.496454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.099 ms 00:26:56.299 [2024-04-24 20:33:26.496490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.299 [2024-04-24 20:33:26.504736] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.299 [2024-04-24 20:33:26.504911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:26:56.299 [2024-04-24 20:33:26.505048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8.157 ms 00:26:56.299 [2024-04-24 20:33:26.505088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.299 [2024-04-24 20:33:26.505212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.299 [2024-04-24 20:33:26.505318] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:26:56.299 [2024-04-24 20:33:26.505356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:26:56.299 [2024-04-24 20:33:26.505387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.299 [2024-04-24 20:33:26.521030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:26:56.299 [2024-04-24 20:33:26.521161] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:26:56.299 [2024-04-24 20:33:26.521237] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.588 ms 00:26:56.299 [2024-04-24 20:33:26.521270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.558 [2024-04-24 20:33:26.536731] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.558 [2024-04-24 20:33:26.536897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:26:56.558 [2024-04-24 20:33:26.537066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.431 ms 00:26:56.558 [2024-04-24 20:33:26.537102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.558 [2024-04-24 20:33:26.552318] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.558 [2024-04-24 20:33:26.552444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:26:56.559 [2024-04-24 20:33:26.552514] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.180 ms 00:26:56.559 [2024-04-24 20:33:26.552547] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.559 [2024-04-24 20:33:26.567023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.559 [2024-04-24 20:33:26.567142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:26:56.559 [2024-04-24 20:33:26.567209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.414 ms 00:26:56.559 [2024-04-24 20:33:26.567242] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.559 [2024-04-24 20:33:26.567325] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:26:56.559 [2024-04-24 20:33:26.567368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:56.559 [2024-04-24 20:33:26.567431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:26:56.559 [2024-04-24 20:33:26.567477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:26:56.559 [2024-04-24 20:33:26.567575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 
20:33:26.567729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:56.559 [2024-04-24 20:33:26.567794] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:26:56.559 [2024-04-24 20:33:26.567804] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d6a7871b-4402-48fd-ac10-0a902c381bfb 00:26:56.559 [2024-04-24 20:33:26.567814] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:26:56.559 [2024-04-24 20:33:26.567824] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:26:56.559 [2024-04-24 20:33:26.567834] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:26:56.559 [2024-04-24 20:33:26.567844] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:26:56.559 [2024-04-24 20:33:26.567875] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:26:56.559 [2024-04-24 20:33:26.567886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:26:56.559 [2024-04-24 20:33:26.567896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:26:56.559 [2024-04-24 20:33:26.567905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:26:56.559 [2024-04-24 20:33:26.567914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:26:56.559 [2024-04-24 20:33:26.567925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.559 [2024-04-24 20:33:26.567935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:26:56.559 [2024-04-24 20:33:26.567946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.602 ms 00:26:56.559 [2024-04-24 20:33:26.567956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.559 [2024-04-24 20:33:26.586572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.559 [2024-04-24 20:33:26.586606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:26:56.559 [2024-04-24 20:33:26.586625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 18.623 ms 00:26:56.559 [2024-04-24 20:33:26.586635] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.559 [2024-04-24 20:33:26.586894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:56.559 [2024-04-24 20:33:26.586906] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:26:56.559 [2024-04-24 20:33:26.586917] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.224 ms 00:26:56.559 [2024-04-24 20:33:26.586927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.559 [2024-04-24 20:33:26.652643] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:56.559 
[2024-04-24 20:33:26.652700] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:56.559 [2024-04-24 20:33:26.652714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:56.559 [2024-04-24 20:33:26.652724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.559 [2024-04-24 20:33:26.652769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:56.559 [2024-04-24 20:33:26.652780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:56.559 [2024-04-24 20:33:26.652790] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:56.559 [2024-04-24 20:33:26.652799] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.559 [2024-04-24 20:33:26.652920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:56.559 [2024-04-24 20:33:26.652935] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:56.559 [2024-04-24 20:33:26.652945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:56.559 [2024-04-24 20:33:26.652960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.559 [2024-04-24 20:33:26.652979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:56.559 [2024-04-24 20:33:26.652989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:56.559 [2024-04-24 20:33:26.653000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:56.559 [2024-04-24 20:33:26.653009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.559 [2024-04-24 20:33:26.771846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:56.559 [2024-04-24 20:33:26.771907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:56.559 [2024-04-24 20:33:26.771927] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:56.559 [2024-04-24 20:33:26.771937] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.819 [2024-04-24 20:33:26.818665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:56.819 [2024-04-24 20:33:26.818718] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:56.819 [2024-04-24 20:33:26.818732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:56.819 [2024-04-24 20:33:26.818743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.819 [2024-04-24 20:33:26.818827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:56.819 [2024-04-24 20:33:26.818839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:56.819 [2024-04-24 20:33:26.818850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:56.819 [2024-04-24 20:33:26.818878] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.819 [2024-04-24 20:33:26.818933] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:56.819 [2024-04-24 20:33:26.818945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:56.819 [2024-04-24 20:33:26.818955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:56.819 [2024-04-24 20:33:26.818982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.819 [2024-04-24 20:33:26.819094] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl] Rollback 00:26:56.819 [2024-04-24 20:33:26.819107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:56.819 [2024-04-24 20:33:26.819118] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:56.819 [2024-04-24 20:33:26.819128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.819 [2024-04-24 20:33:26.819169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:56.819 [2024-04-24 20:33:26.819181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:26:56.819 [2024-04-24 20:33:26.819191] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:56.819 [2024-04-24 20:33:26.819200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.819 [2024-04-24 20:33:26.819240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:56.819 [2024-04-24 20:33:26.819251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:56.819 [2024-04-24 20:33:26.819262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:56.819 [2024-04-24 20:33:26.819277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.819 [2024-04-24 20:33:26.819324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:26:56.819 [2024-04-24 20:33:26.819335] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:56.819 [2024-04-24 20:33:26.819345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:26:56.819 [2024-04-24 20:33:26.819355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:56.819 [2024-04-24 20:33:26.819478] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 344.721 ms, result 0 00:26:58.199 20:33:28 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:26:58.199 20:33:28 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:58.199 20:33:28 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:26:58.199 20:33:28 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:26:58.199 20:33:28 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:26:58.199 20:33:28 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:58.199 Remove shared memory files 00:26:58.199 20:33:28 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:26:58.199 20:33:28 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:58.199 20:33:28 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:58.199 20:33:28 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:58.199 20:33:28 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83855 00:26:58.199 20:33:28 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:58.199 20:33:28 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:58.199 ************************************ 00:26:58.199 END TEST ftl_upgrade_shutdown 00:26:58.199 ************************************ 00:26:58.199 00:26:58.199 real 1m33.111s 00:26:58.199 user 2m8.143s 00:26:58.199 sys 0m24.061s 00:26:58.199 20:33:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:58.199 20:33:28 -- common/autotest_common.sh@10 -- # set +x 00:26:58.199 20:33:28 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:26:58.199 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:26:58.199 20:33:28 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:26:58.199 
/home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:26:58.199 20:33:28 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:26:58.199 20:33:28 -- ftl/ftl.sh@14 -- # killprocess 76934 00:26:58.199 20:33:28 -- common/autotest_common.sh@936 -- # '[' -z 76934 ']' 00:26:58.199 20:33:28 -- common/autotest_common.sh@940 -- # kill -0 76934 00:26:58.199 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76934) - No such process 00:26:58.199 Process with pid 76934 is not found 00:26:58.199 20:33:28 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76934 is not found' 00:26:58.199 20:33:28 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:26:58.199 20:33:28 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84303 00:26:58.199 20:33:28 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:58.199 20:33:28 -- ftl/ftl.sh@20 -- # waitforlisten 84303 00:26:58.199 20:33:28 -- common/autotest_common.sh@817 -- # '[' -z 84303 ']' 00:26:58.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.199 20:33:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.199 20:33:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:58.199 20:33:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.199 20:33:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:58.199 20:33:28 -- common/autotest_common.sh@10 -- # set +x 00:26:58.199 [2024-04-24 20:33:28.302711] Starting SPDK v24.05-pre git sha1 be7d3cb46 / DPDK 23.11.0 initialization... 00:26:58.199 [2024-04-24 20:33:28.302818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84303 ] 00:26:58.458 [2024-04-24 20:33:28.472996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.716 [2024-04-24 20:33:28.708220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.653 20:33:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:59.653 20:33:29 -- common/autotest_common.sh@850 -- # return 0 00:26:59.653 20:33:29 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:59.912 nvme0n1 00:26:59.912 20:33:29 -- ftl/ftl.sh@22 -- # clear_lvols 00:26:59.912 20:33:29 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:59.912 20:33:29 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:59.912 20:33:30 -- ftl/common.sh@28 -- # stores=fa7b0167-5bbc-44f0-97e8-22f465067706 00:26:59.912 20:33:30 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:59.912 20:33:30 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fa7b0167-5bbc-44f0-97e8-22f465067706 00:27:00.171 20:33:30 -- ftl/ftl.sh@23 -- # killprocess 84303 00:27:00.171 20:33:30 -- common/autotest_common.sh@936 -- # '[' -z 84303 ']' 00:27:00.171 20:33:30 -- common/autotest_common.sh@940 -- # kill -0 84303 00:27:00.171 20:33:30 -- common/autotest_common.sh@941 -- # uname 00:27:00.171 20:33:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:00.171 20:33:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84303 00:27:00.171 killing process with pid 84303 00:27:00.171 20:33:30 
-- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:00.171 20:33:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:00.171 20:33:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84303' 00:27:00.171 20:33:30 -- common/autotest_common.sh@955 -- # kill 84303 00:27:00.171 20:33:30 -- common/autotest_common.sh@960 -- # wait 84303 00:27:02.705 20:33:32 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:02.963 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:02.963 Waiting for block devices as requested 00:27:02.963 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:03.222 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:03.222 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:03.481 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:08.756 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:08.756 Remove shared memory files 00:27:08.756 20:33:38 -- ftl/ftl.sh@28 -- # remove_shm 00:27:08.756 20:33:38 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:08.756 20:33:38 -- ftl/common.sh@205 -- # rm -f rm -f 00:27:08.756 20:33:38 -- ftl/common.sh@206 -- # rm -f rm -f 00:27:08.756 20:33:38 -- ftl/common.sh@207 -- # rm -f rm -f 00:27:08.756 20:33:38 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:08.756 20:33:38 -- ftl/common.sh@209 -- # rm -f rm -f 00:27:08.756 00:27:08.756 real 10m50.230s 00:27:08.756 user 13m30.464s 00:27:08.756 sys 1m27.264s 00:27:08.756 20:33:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:08.756 20:33:38 -- common/autotest_common.sh@10 -- # set +x 00:27:08.756 ************************************ 00:27:08.756 END TEST ftl 00:27:08.756 ************************************ 00:27:08.756 20:33:38 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:27:08.756 20:33:38 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:27:08.756 20:33:38 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:27:08.756 20:33:38 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:27:08.756 20:33:38 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:27:08.756 20:33:38 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:27:08.756 20:33:38 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:27:08.756 20:33:38 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:27:08.756 20:33:38 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:27:08.756 20:33:38 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:27:08.756 20:33:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:08.757 20:33:38 -- common/autotest_common.sh@10 -- # set +x 00:27:08.757 20:33:38 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:27:08.757 20:33:38 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:27:08.757 20:33:38 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:27:08.757 20:33:38 -- common/autotest_common.sh@10 -- # set +x 00:27:10.663 INFO: APP EXITING 00:27:10.663 INFO: killing all VMs 00:27:10.663 INFO: killing vhost app 00:27:10.663 INFO: EXIT DONE 00:27:10.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:11.491 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:27:11.491 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:27:11.491 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:27:11.491 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:27:12.060 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:12.319 Cleaning 00:27:12.319 Removing: /var/run/dpdk/spdk0/config 00:27:12.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:12.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:12.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:12.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:12.319 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:12.319 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:12.319 Removing: /var/run/dpdk/spdk0 00:27:12.319 Removing: /var/run/dpdk/spdk_pid61297 00:27:12.319 Removing: /var/run/dpdk/spdk_pid61563 00:27:12.319 Removing: /var/run/dpdk/spdk_pid61823 00:27:12.319 Removing: /var/run/dpdk/spdk_pid61937 00:27:12.319 Removing: /var/run/dpdk/spdk_pid61993 00:27:12.319 Removing: /var/run/dpdk/spdk_pid62140 00:27:12.319 Removing: /var/run/dpdk/spdk_pid62158 00:27:12.319 Removing: /var/run/dpdk/spdk_pid62371 00:27:12.319 Removing: /var/run/dpdk/spdk_pid62480 00:27:12.319 Removing: /var/run/dpdk/spdk_pid62590 00:27:12.319 Removing: /var/run/dpdk/spdk_pid62717 00:27:12.319 Removing: /var/run/dpdk/spdk_pid62838 00:27:12.319 Removing: /var/run/dpdk/spdk_pid62887 00:27:12.319 Removing: /var/run/dpdk/spdk_pid62933 00:27:12.319 Removing: /var/run/dpdk/spdk_pid63006 00:27:12.579 Removing: /var/run/dpdk/spdk_pid63143 00:27:12.579 Removing: /var/run/dpdk/spdk_pid63587 00:27:12.579 Removing: /var/run/dpdk/spdk_pid63667 00:27:12.579 Removing: /var/run/dpdk/spdk_pid63745 00:27:12.579 Removing: /var/run/dpdk/spdk_pid63772 00:27:12.579 Removing: /var/run/dpdk/spdk_pid63930 00:27:12.579 Removing: /var/run/dpdk/spdk_pid63947 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64110 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64126 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64209 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64227 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64301 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64324 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64526 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64568 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64659 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64749 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64790 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64886 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64938 00:27:12.579 Removing: /var/run/dpdk/spdk_pid64995 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65051 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65107 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65163 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65219 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65271 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65321 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65379 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65430 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65486 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65541 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65589 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65646 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65697 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65753 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65812 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65865 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65918 00:27:12.579 Removing: /var/run/dpdk/spdk_pid65975 00:27:12.579 Removing: /var/run/dpdk/spdk_pid66062 00:27:12.579 Removing: /var/run/dpdk/spdk_pid66197 00:27:12.579 Removing: /var/run/dpdk/spdk_pid66382 00:27:12.579 Removing: /var/run/dpdk/spdk_pid66493 
00:27:12.579 Removing: /var/run/dpdk/spdk_pid66539 00:27:12.579 Removing: /var/run/dpdk/spdk_pid67000 00:27:12.579 Removing: /var/run/dpdk/spdk_pid67109 00:27:12.579 Removing: /var/run/dpdk/spdk_pid67231 00:27:12.579 Removing: /var/run/dpdk/spdk_pid67290 00:27:12.579 Removing: /var/run/dpdk/spdk_pid67325 00:27:12.579 Removing: /var/run/dpdk/spdk_pid67412 00:27:12.579 Removing: /var/run/dpdk/spdk_pid68062 00:27:12.579 Removing: /var/run/dpdk/spdk_pid68109 00:27:12.579 Removing: /var/run/dpdk/spdk_pid68614 00:27:12.579 Removing: /var/run/dpdk/spdk_pid68723 00:27:12.579 Removing: /var/run/dpdk/spdk_pid68847 00:27:12.579 Removing: /var/run/dpdk/spdk_pid68912 00:27:12.579 Removing: /var/run/dpdk/spdk_pid68947 00:27:12.579 Removing: /var/run/dpdk/spdk_pid68982 00:27:12.579 Removing: /var/run/dpdk/spdk_pid70936 00:27:12.838 Removing: /var/run/dpdk/spdk_pid71089 00:27:12.838 Removing: /var/run/dpdk/spdk_pid71097 00:27:12.838 Removing: /var/run/dpdk/spdk_pid71116 00:27:12.838 Removing: /var/run/dpdk/spdk_pid71155 00:27:12.838 Removing: /var/run/dpdk/spdk_pid71159 00:27:12.838 Removing: /var/run/dpdk/spdk_pid71171 00:27:12.838 Removing: /var/run/dpdk/spdk_pid71216 00:27:12.838 Removing: /var/run/dpdk/spdk_pid71220 00:27:12.838 Removing: /var/run/dpdk/spdk_pid71232 00:27:12.838 Removing: /var/run/dpdk/spdk_pid71277 00:27:12.838 Removing: /var/run/dpdk/spdk_pid71281 00:27:12.838 Removing: /var/run/dpdk/spdk_pid71293 00:27:12.838 Removing: /var/run/dpdk/spdk_pid72683 00:27:12.838 Removing: /var/run/dpdk/spdk_pid72793 00:27:12.838 Removing: /var/run/dpdk/spdk_pid72938 00:27:12.838 Removing: /var/run/dpdk/spdk_pid73061 00:27:12.838 Removing: /var/run/dpdk/spdk_pid73183 00:27:12.838 Removing: /var/run/dpdk/spdk_pid73298 00:27:12.838 Removing: /var/run/dpdk/spdk_pid73445 00:27:12.838 Removing: /var/run/dpdk/spdk_pid73525 00:27:12.838 Removing: /var/run/dpdk/spdk_pid73676 00:27:12.838 Removing: /var/run/dpdk/spdk_pid74060 00:27:12.838 Removing: /var/run/dpdk/spdk_pid74106 00:27:12.838 Removing: /var/run/dpdk/spdk_pid74585 00:27:12.838 Removing: /var/run/dpdk/spdk_pid74777 00:27:12.838 Removing: /var/run/dpdk/spdk_pid74890 00:27:12.838 Removing: /var/run/dpdk/spdk_pid75011 00:27:12.838 Removing: /var/run/dpdk/spdk_pid75076 00:27:12.838 Removing: /var/run/dpdk/spdk_pid75111 00:27:12.838 Removing: /var/run/dpdk/spdk_pid75402 00:27:12.838 Removing: /var/run/dpdk/spdk_pid75473 00:27:12.838 Removing: /var/run/dpdk/spdk_pid75559 00:27:12.838 Removing: /var/run/dpdk/spdk_pid75975 00:27:12.838 Removing: /var/run/dpdk/spdk_pid76128 00:27:12.838 Removing: /var/run/dpdk/spdk_pid76934 00:27:12.838 Removing: /var/run/dpdk/spdk_pid77081 00:27:12.838 Removing: /var/run/dpdk/spdk_pid77288 00:27:12.838 Removing: /var/run/dpdk/spdk_pid77392 00:27:12.838 Removing: /var/run/dpdk/spdk_pid77752 00:27:12.838 Removing: /var/run/dpdk/spdk_pid78010 00:27:12.838 Removing: /var/run/dpdk/spdk_pid78391 00:27:12.838 Removing: /var/run/dpdk/spdk_pid78596 00:27:12.838 Removing: /var/run/dpdk/spdk_pid78725 00:27:12.838 Removing: /var/run/dpdk/spdk_pid78796 00:27:12.838 Removing: /var/run/dpdk/spdk_pid78931 00:27:12.838 Removing: /var/run/dpdk/spdk_pid78967 00:27:12.838 Removing: /var/run/dpdk/spdk_pid79044 00:27:12.838 Removing: /var/run/dpdk/spdk_pid79236 00:27:12.838 Removing: /var/run/dpdk/spdk_pid79476 00:27:12.838 Removing: /var/run/dpdk/spdk_pid79850 00:27:12.838 Removing: /var/run/dpdk/spdk_pid80241 00:27:12.838 Removing: /var/run/dpdk/spdk_pid80653 00:27:12.838 Removing: /var/run/dpdk/spdk_pid81096 00:27:12.838 Removing: 
/var/run/dpdk/spdk_pid81238 00:27:12.838 Removing: /var/run/dpdk/spdk_pid81335 00:27:12.838 Removing: /var/run/dpdk/spdk_pid81910 00:27:12.838 Removing: /var/run/dpdk/spdk_pid81984 00:27:12.838 Removing: /var/run/dpdk/spdk_pid82400 00:27:12.838 Removing: /var/run/dpdk/spdk_pid82771 00:27:12.838 Removing: /var/run/dpdk/spdk_pid83228 00:27:13.098 Removing: /var/run/dpdk/spdk_pid83379 00:27:13.098 Removing: /var/run/dpdk/spdk_pid83436 00:27:13.098 Removing: /var/run/dpdk/spdk_pid83504 00:27:13.098 Removing: /var/run/dpdk/spdk_pid83568 00:27:13.098 Removing: /var/run/dpdk/spdk_pid83638 00:27:13.098 Removing: /var/run/dpdk/spdk_pid83855 00:27:13.098 Removing: /var/run/dpdk/spdk_pid83901 00:27:13.098 Removing: /var/run/dpdk/spdk_pid83980 00:27:13.098 Removing: /var/run/dpdk/spdk_pid84059 00:27:13.098 Removing: /var/run/dpdk/spdk_pid84099 00:27:13.098 Removing: /var/run/dpdk/spdk_pid84177 00:27:13.098 Removing: /var/run/dpdk/spdk_pid84303 00:27:13.098 Clean 00:27:13.098 20:33:43 -- common/autotest_common.sh@1437 -- # return 0 00:27:13.098 20:33:43 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:27:13.098 20:33:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:13.098 20:33:43 -- common/autotest_common.sh@10 -- # set +x 00:27:13.357 20:33:43 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:27:13.357 20:33:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:13.357 20:33:43 -- common/autotest_common.sh@10 -- # set +x 00:27:13.357 20:33:43 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:13.357 20:33:43 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:27:13.357 20:33:43 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:27:13.357 20:33:43 -- spdk/autotest.sh@389 -- # hash lcov 00:27:13.357 20:33:43 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:13.357 20:33:43 -- spdk/autotest.sh@391 -- # hostname 00:27:13.357 20:33:43 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:27:13.357 geninfo: WARNING: invalid characters removed from testname! 
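Annotation: with the tests finished, autotest.sh turns to coverage. The geninfo warning just above is lcov sanitizing the fedora38 hostname it was handed as the tracefile test name, and the lcov runs that follow merge the pre-test baseline with this run's counters, then strip out code SPDK does not own (DPDK, system headers, example apps). Condensed, with the long --rc option list abbreviated to the two coverage switches:

    cd /home/vagrant/spdk_repo/spdk
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    $LCOV -c -d . -t "$(hostname)" -o ../output/cov_test.info        # capture this run
    $LCOV -a ../output/cov_base.info -a ../output/cov_test.info \
          -o ../output/cov_total.info                                # merge with baseline
    $LCOV -r ../output/cov_total.info '*/dpdk/*' -o ../output/cov_total.info
    $LCOV -r ../output/cov_total.info '/usr/*' -o ../output/cov_total.info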
00:27:39.910 20:34:06 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:39.910 20:34:09 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:41.812 20:34:11 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:43.718 20:34:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:46.293 20:34:15 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:48.222 20:34:18 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:27:50.127 20:34:20 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:50.127 20:34:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:50.386 20:34:20 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:50.386 20:34:20 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.386 20:34:20 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.386 20:34:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.386 20:34:20 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.386 20:34:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.386 20:34:20 -- paths/export.sh@5 -- $ export PATH 00:27:50.386 20:34:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.386 20:34:20 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:27:50.386 20:34:20 -- common/autobuild_common.sh@435 -- $ date +%s 00:27:50.386 20:34:20 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713990860.XXXXXX 00:27:50.386 20:34:20 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713990860.VRMuNw 00:27:50.386 20:34:20 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:27:50.386 20:34:20 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:27:50.386 20:34:20 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:27:50.386 20:34:20 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:27:50.386 20:34:20 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:27:50.386 20:34:20 -- common/autobuild_common.sh@451 -- $ get_config_params 00:27:50.386 20:34:20 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:27:50.386 20:34:20 -- common/autotest_common.sh@10 -- $ set +x 00:27:50.386 20:34:20 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:27:50.386 20:34:20 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:27:50.386 20:34:20 -- pm/common@17 -- $ local monitor 00:27:50.386 20:34:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:50.386 20:34:20 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=86033 00:27:50.386 20:34:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:50.386 20:34:20 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=86035 00:27:50.386 20:34:20 -- pm/common@26 -- $ sleep 1 00:27:50.386 20:34:20 -- pm/common@21 -- $ date +%s 00:27:50.386 20:34:20 -- pm/common@21 -- $ date +%s 00:27:50.386 20:34:20 -- pm/common@21 -- $ sudo -E 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713990860 00:27:50.386 20:34:20 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713990860 00:27:50.386 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713990860_collect-vmstat.pm.log 00:27:50.386 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713990860_collect-cpu-load.pm.log 00:27:51.323 20:34:21 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:27:51.323 20:34:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:27:51.323 20:34:21 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:27:51.323 20:34:21 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:51.323 20:34:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:51.323 20:34:21 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:51.323 20:34:21 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:51.323 20:34:21 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:51.323 20:34:21 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:27:51.323 20:34:21 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:51.323 20:34:21 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:27:51.323 20:34:21 -- pm/common@30 -- $ signal_monitor_resources TERM 00:27:51.323 20:34:21 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:27:51.323 20:34:21 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:51.323 20:34:21 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:27:51.323 20:34:21 -- pm/common@45 -- $ pid=86040 00:27:51.323 20:34:21 -- pm/common@52 -- $ sudo kill -TERM 86040 00:27:51.323 20:34:21 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:51.323 20:34:21 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:27:51.323 20:34:21 -- pm/common@45 -- $ pid=86041 00:27:51.323 20:34:21 -- pm/common@52 -- $ sudo kill -TERM 86041 00:27:51.581 + [[ -n 5140 ]] 00:27:51.581 + sudo kill 5140 00:27:51.590 [Pipeline] } 00:27:51.609 [Pipeline] // timeout 00:27:51.614 [Pipeline] } 00:27:51.628 [Pipeline] // stage 00:27:51.633 [Pipeline] } 00:27:51.648 [Pipeline] // catchError 00:27:51.655 [Pipeline] stage 00:27:51.657 [Pipeline] { (Stop VM) 00:27:51.668 [Pipeline] sh 00:27:51.944 + vagrant halt 00:27:55.239 ==> default: Halting domain... 00:28:01.828 [Pipeline] sh 00:28:02.110 + vagrant destroy -f 00:28:05.410 ==> default: Removing domain... 
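Annotation: the "(Stop VM)" stage above is the entire guest teardown, and both commands appear verbatim in its sh steps: a graceful halt of the libvirt domain, then a forced destroy. In plain form (the real pipeline step's error handling is not shown):

    vagrant halt          # "==> default: Halting domain..."
    vagrant destroy -f    # "==> default: Removing domain..." -- skips the confirmation prompt

The step that follows then moves the collected output into the workspace so the epilogue can compress, size-check and archive it.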
00:28:05.424 [Pipeline] sh 00:28:05.706 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:28:05.715 [Pipeline] } 00:28:05.731 [Pipeline] // stage 00:28:05.737 [Pipeline] } 00:28:05.753 [Pipeline] // dir 00:28:05.758 [Pipeline] } 00:28:05.775 [Pipeline] // wrap 00:28:05.781 [Pipeline] } 00:28:05.796 [Pipeline] // catchError 00:28:05.804 [Pipeline] stage 00:28:05.807 [Pipeline] { (Epilogue) 00:28:05.820 [Pipeline] sh 00:28:06.103 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:11.391 [Pipeline] catchError 00:28:11.392 [Pipeline] { 00:28:11.406 [Pipeline] sh 00:28:11.690 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:11.690 Artifacts sizes are good 00:28:11.700 [Pipeline] } 00:28:11.718 [Pipeline] // catchError 00:28:11.730 [Pipeline] archiveArtifacts 00:28:11.737 Archiving artifacts 00:28:11.847 [Pipeline] cleanWs 00:28:11.857 [WS-CLEANUP] Deleting project workspace... 00:28:11.858 [WS-CLEANUP] Deferred wipeout is used... 00:28:11.863 [WS-CLEANUP] done 00:28:11.865 [Pipeline] } 00:28:11.884 [Pipeline] // stage 00:28:11.889 [Pipeline] } 00:28:11.904 [Pipeline] // node 00:28:11.910 [Pipeline] End of Pipeline 00:28:11.953 Finished: SUCCESS
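Closing annotation: one loose end worth flagging is that ftl.sh lines 82 and 89 logged "[: -eq: unary operator expected" near the end of the FTL suite. The traced command '[' -eq 1 ']' shows the cause: the variable under test expanded to an empty string, so the single-bracket test saw -eq as its first operand. The build still finished SUCCESS because the broken test is simply falsy, so both branches were skipped and the script ran on to its exit trap. A guarded spelling avoids the error entirely (the variable name below is illustrative; the real one is not visible in this log):

    # [ $ftl_nightly -eq 1 ]          # breaks when the variable is unset or empty
    [ "${ftl_nightly:-0}" -eq 1 ]     # quoting plus a default keeps the test well-formed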